Feb 19 15:09:29 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 15:09:29 crc restorecon[4676]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 15:09:29 crc restorecon[4676]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:29
crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:29 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 
crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 15:09:30 crc restorecon[4676]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 15:09:31 crc kubenswrapper[4810]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.115987 4810 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123872 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123903 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123913 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123922 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123931 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123941 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123950 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123958 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123966 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123973 4810 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123981 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123988 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.123996 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124007 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124017 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124026 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124034 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124042 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124049 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124057 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124065 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124073 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124091 4810 feature_gate.go:330] unrecognized feature gate: Example Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124099 4810 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124107 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124115 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124122 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124130 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124139 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124146 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124156 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124166 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124176 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124184 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124192 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124200 4810 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124208 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124216 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124225 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124236 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124247 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124257 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124266 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124276 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124284 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124293 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124301 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124309 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124317 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124347 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124355 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124363 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124371 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124392 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124401 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124409 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124417 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124424 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124432 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124440 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124447 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124455 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124462 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124469 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124477 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124486 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124496 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124504 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124511 4810 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124522 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.124530 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125571 4810 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125595 4810 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125612 4810 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125623 4810 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125635 4810 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125644 4810 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125656 4810 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125666 4810 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125675 4810 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125685 4810 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125695 4810 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125704 4810 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125713 4810 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125721 4810 flags.go:64] FLAG: --cgroup-root="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125730 4810 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125739 4810 flags.go:64] FLAG: --client-ca-file="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125748 4810 flags.go:64] FLAG: --cloud-config="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125756 4810 flags.go:64] FLAG: --cloud-provider="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125765 4810 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125776 4810 flags.go:64] FLAG: --cluster-domain="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125784 4810 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125793 4810 flags.go:64] FLAG: --config-dir="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125802 4810 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125811 4810 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125822 4810 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125831 4810 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125840 4810 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125849 4810 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125858 4810 flags.go:64] FLAG: --contention-profiling="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125867 4810 flags.go:64] FLAG: 
--cpu-cfs-quota="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125877 4810 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125887 4810 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125895 4810 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125906 4810 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125915 4810 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125924 4810 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125933 4810 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125942 4810 flags.go:64] FLAG: --enable-server="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125951 4810 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125963 4810 flags.go:64] FLAG: --event-burst="100" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125973 4810 flags.go:64] FLAG: --event-qps="50" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125981 4810 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125990 4810 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.125999 4810 flags.go:64] FLAG: --eviction-hard="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126010 4810 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126019 4810 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126027 4810 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126037 4810 flags.go:64] FLAG: --eviction-soft="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126045 4810 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126054 4810 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126063 4810 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126073 4810 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126081 4810 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126090 4810 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126099 4810 flags.go:64] FLAG: --feature-gates="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126109 4810 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126118 4810 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126128 4810 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126137 4810 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126146 4810 flags.go:64] FLAG: --healthz-port="10248" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126155 4810 flags.go:64] FLAG: --help="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126165 4810 flags.go:64] FLAG: --hostname-override="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126175 4810 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126185 4810 flags.go:64] FLAG: 
--http-check-frequency="20s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126194 4810 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126202 4810 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126212 4810 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126220 4810 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126229 4810 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126237 4810 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126246 4810 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126255 4810 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126264 4810 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126274 4810 flags.go:64] FLAG: --kube-reserved="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126283 4810 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126291 4810 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126300 4810 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126309 4810 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126318 4810 flags.go:64] FLAG: --lock-file="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126358 4810 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126367 4810 
flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126376 4810 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126390 4810 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126399 4810 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126407 4810 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126416 4810 flags.go:64] FLAG: --logging-format="text" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126425 4810 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126435 4810 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126443 4810 flags.go:64] FLAG: --manifest-url="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126452 4810 flags.go:64] FLAG: --manifest-url-header="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126463 4810 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126472 4810 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126483 4810 flags.go:64] FLAG: --max-pods="110" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126494 4810 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126504 4810 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126514 4810 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126523 4810 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 
15:09:31.126533 4810 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126542 4810 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126551 4810 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126570 4810 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126579 4810 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126588 4810 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126597 4810 flags.go:64] FLAG: --pod-cidr="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126606 4810 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126620 4810 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126628 4810 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126637 4810 flags.go:64] FLAG: --pods-per-core="0" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126646 4810 flags.go:64] FLAG: --port="10250" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126657 4810 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126665 4810 flags.go:64] FLAG: --provider-id="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126674 4810 flags.go:64] FLAG: --qos-reserved="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126682 4810 flags.go:64] FLAG: --read-only-port="10255" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 
15:09:31.126692 4810 flags.go:64] FLAG: --register-node="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126700 4810 flags.go:64] FLAG: --register-schedulable="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126709 4810 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126723 4810 flags.go:64] FLAG: --registry-burst="10" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126731 4810 flags.go:64] FLAG: --registry-qps="5" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126740 4810 flags.go:64] FLAG: --reserved-cpus="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126749 4810 flags.go:64] FLAG: --reserved-memory="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126760 4810 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126769 4810 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126778 4810 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126787 4810 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126795 4810 flags.go:64] FLAG: --runonce="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126805 4810 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126814 4810 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126824 4810 flags.go:64] FLAG: --seccomp-default="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126833 4810 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126842 4810 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: 
I0219 15:09:31.126851 4810 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126860 4810 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126868 4810 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126877 4810 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126886 4810 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126894 4810 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126903 4810 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126912 4810 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126921 4810 flags.go:64] FLAG: --system-cgroups="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126930 4810 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126943 4810 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126951 4810 flags.go:64] FLAG: --tls-cert-file="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126960 4810 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126970 4810 flags.go:64] FLAG: --tls-min-version="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126979 4810 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126989 4810 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.126998 4810 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 15:09:31 crc 
kubenswrapper[4810]: I0219 15:09:31.127006 4810 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127015 4810 flags.go:64] FLAG: --v="2" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127027 4810 flags.go:64] FLAG: --version="false" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127045 4810 flags.go:64] FLAG: --vmodule="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127055 4810 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127065 4810 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127265 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127275 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127284 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127292 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127301 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127309 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127318 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127349 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127359 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127367 4810 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127375 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127385 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127395 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127406 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127414 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127423 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127432 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127441 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127449 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127457 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127465 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127473 4810 feature_gate.go:330] unrecognized feature gate: Example Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127481 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127491 4810 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127502 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127510 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127519 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127528 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127537 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127546 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127554 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127562 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127569 4810 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127578 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127585 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127593 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127601 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127608 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation 
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127618 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127627 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127641 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127649 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127656 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127664 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127672 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127679 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127687 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127694 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127703 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127710 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127718 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127726 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127733 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127740 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127748 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127758 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127768 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127777 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127786 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127794 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127802 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127810 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127818 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127826 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127836 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127845 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127853 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127861 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127870 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127879 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.127889 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.127912 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.141753 4810 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.141796 4810 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141927 4810 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141939 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141949 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141959 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141967 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141975 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141983 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.141994 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142005 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142015 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142024 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142033 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142042 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142049 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142057 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142066 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142075 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142083 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142091 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142099 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142108 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142116 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142125 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142133 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142142 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142153 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142163 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142172 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142181 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142188 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142196 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142204 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142212 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142219 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142227 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142235 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142243 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142250 4810 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142260 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142270 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142278 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142288 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142296 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142305 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142314 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142323 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142357 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142366 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142376 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142386 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142395 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142403 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142412 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142421 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142429 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142442 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142450 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142459 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142466 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142474 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142482 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142489 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142497 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142505 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142512 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142520 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142527 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142536 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142544 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142552 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142559 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.142572 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142792 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142803 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142812 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142820 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142827 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142836 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142843 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142853 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142863 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142873 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142883 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142892 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142900 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142908 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142917 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142925 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142934 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142941 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142949 4810 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142966 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142974 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142982 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142990 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.142997 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143005 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143013 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143021 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143029 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143036 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143044 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143052 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143060 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143070 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143080 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143089 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143097 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143105 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143114 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143122 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143130 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143138 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143146 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143182 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143192 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143202 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143210 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143218 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143226 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143235 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143243 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143250 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143258 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143266 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143274 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143281 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143291 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143299 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143307 4810 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143315 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143322 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143353 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143361 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143368 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143377 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143385 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143393 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143401 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143409 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143417 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143425 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.143435 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.143447 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.144463 4810 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.150221 4810 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.150376 4810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.152250 4810 server.go:997] "Starting client certificate rotation"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.152352 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.152644 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 17:32:51.458626515 +0000 UTC
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.152762 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.196215 4810 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.198389 4810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.199312 4810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.236459 4810 log.go:25] "Validated CRI v1 runtime API"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.278884 4810 log.go:25] "Validated CRI v1 image API"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.281510 4810 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.286816 4810 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-15-04-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.286864 4810 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.313915 4810 manager.go:217] Machine: {Timestamp:2026-02-19 15:09:31.311285929 +0000 UTC m=+0.793316103 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:60bcb373-142f-4da9-846e-4d055863e63a BootID:7961bd2b-8ad7-4e25-b313-8f82bef01c62 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:75:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:75:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5e:46:55 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:72:d9:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4b:9c:a2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7c:27:20 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:b9:c6:b4:c7:ea Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:2b:f3:4a:fa:d8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.314313 4810 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.314541 4810 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.317053 4810 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.317405 4810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.317467 4810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.317808 4810 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.317826 4810 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.318466 4810 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.318517 4810 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.318958 4810 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.319104 4810 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.325649 4810 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.325718 4810 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.325761 4810 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.325784 4810 kubelet.go:324] "Adding apiserver pod source"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.325830 4810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.330699 4810 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.331859 4810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.336294 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.336303 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.336465 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.336475 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.341559 4810 kubelet.go:854] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343721 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343765 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343781 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343794 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343817 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343832 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343845 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343868 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343886 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343901 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343922 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.343936 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.345130 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.346101 4810 server.go:1280] "Started 
kubelet" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.346145 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.346488 4810 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.346457 4810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.347299 4810 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 15:09:31 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.349964 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.350004 4810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.350117 4810 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.350149 4810 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.350406 4810 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.350585 4810 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.352111 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:10:34.266406799 +0000 UTC Feb 19 15:09:31 crc 
kubenswrapper[4810]: I0219 15:09:31.353460 4810 factory.go:55] Registering systemd factory Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.353491 4810 factory.go:221] Registration of the systemd container factory successfully Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.353715 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.353793 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.353909 4810 factory.go:153] Registering CRI-O factory Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.353954 4810 factory.go:221] Registration of the crio container factory successfully Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.354073 4810 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.354108 4810 factory.go:103] Registering Raw factory Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.354129 4810 manager.go:1196] Started watching for new ooms in manager Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.354818 4810 manager.go:319] Starting recovery of all containers Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.354994 4810 server.go:460] "Adding debug handlers to kubelet server" Feb 19 
15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.355770 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.390543 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391015 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391044 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391070 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391091 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 
15:09:31.391112 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391143 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.388012 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895ae639fcb2989 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 15:09:31.346045321 +0000 UTC m=+0.828075485,LastTimestamp:2026-02-19 15:09:31.346045321 +0000 UTC m=+0.828075485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391225 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391264 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391290 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391359 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391392 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391423 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391454 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391505 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" 
seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391529 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391554 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391579 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391605 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391631 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391663 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391696 4810 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391724 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391750 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391787 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391818 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391854 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391885 4810 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391913 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391942 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.391976 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392010 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392043 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392069 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392095 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392115 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392177 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392202 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392225 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392249 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392271 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392292 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392365 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392391 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392415 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392436 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" 
seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392459 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392479 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392502 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392523 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392543 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392567 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 15:09:31 crc 
kubenswrapper[4810]: I0219 15:09:31.392600 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392661 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392682 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392705 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392727 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392748 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392768 4810 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392789 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392811 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392864 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392885 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392907 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392927 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392947 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.392998 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393016 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393038 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393061 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393061 4810 manager.go:324] Recovery completed Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393090 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393122 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393151 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393179 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393209 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393237 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393266 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393291 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393317 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393380 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393413 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393440 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393466 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" 
seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393490 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393513 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393534 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393557 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393585 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393607 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393627 4810 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393651 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393679 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393708 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393734 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393762 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393788 4810 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393817 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393846 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393873 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393899 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393948 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.393976 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394005 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394035 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394075 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394099 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394119 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394143 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" 
seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394165 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394246 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394273 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394294 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394314 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394460 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 15:09:31 crc 
kubenswrapper[4810]: I0219 15:09:31.394500 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394529 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394560 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394586 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394616 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394649 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394676 4810 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394705 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394736 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394763 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394792 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394817 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.394849 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.397870 4810 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.397919 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.397944 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.397965 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399067 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399142 4810 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399171 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399211 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399236 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399261 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399282 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399305 4810 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399351 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399374 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399401 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399428 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399451 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399473 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399495 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399518 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399542 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399568 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399590 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399611 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399638 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399664 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399687 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399709 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399731 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399755 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399783 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399805 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399827 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399849 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399872 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399917 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 
15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399942 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399965 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.399987 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400010 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400035 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400058 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400081 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400105 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400128 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400152 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400176 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400198 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400221 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400248 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400275 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400299 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400406 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400432 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400457 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400480 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400504 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400528 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400554 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400580 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400602 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400624 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400648 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400670 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400694 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400717 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400738 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400762 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400786 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400812 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400836 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400859 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400883 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400905 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400927 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400952 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.400977 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401001 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401025 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401048 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401072 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401094 4810 reconstruct.go:97] "Volume reconstruction finished"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.401111 4810 reconciler.go:26] "Reconciler: start to sync state"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.411705 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.414695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.414765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.414791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.425522 4810 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.425558 4810 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.425595 4810 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.434180 4810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.437911 4810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.438007 4810 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.438055 4810 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.438210 4810 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.445294 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.445474 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.450727 4810 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.451531 4810 policy_none.go:49] "None policy: Start"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.455039 4810 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.455086 4810 state_mem.go:35] "Initializing new in-memory state store"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.529856 4810 manager.go:334] "Starting Device Plugin manager"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.530290 4810 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.530377 4810 server.go:79] "Starting device plugin registration server"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.531137 4810 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.531200 4810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.531565 4810 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.531660 4810 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.531670 4810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.539091 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.539244 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.541179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.541219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.541237 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.541454 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.541957 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.542021 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.542811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.542847 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.542862 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.542994 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.543153 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.543217 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.543561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.543589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.543602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544204 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544422 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544852 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.544881 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.545300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.545315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.545350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.546626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.546645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.546656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.546763 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.547186 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.547213 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.547688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.547705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.547715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.554531 4810 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.556890 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557402 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557471 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.557933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.558010 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.566414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.566456 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.566485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604146 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604269 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604302 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604353 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604459 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604549 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604711 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604821 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604861 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604926 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.604959 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.605012 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.605091 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.632197 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.634010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.634083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.634106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.634153 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.634985 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.705922 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706009 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706041 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706061 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706112 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706138 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706160 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706181 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706268 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706321 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706222 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706271 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706295 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706500 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706512 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706313 4810 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706528 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706244 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706549 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706228 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706586 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706616 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706650 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706722 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706731 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 
15:09:31.706762 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706844 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.706798 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.835566 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.837443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.837502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.837525 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.837565 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.838168 4810 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.890773 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.910523 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.913610 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.924344 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: I0219 15:09:31.930611 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.955540 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-60953f79ae7c81971a2816c3cd4dd34a647f2714a29b03dc7ad988170ddd25f5 WatchSource:0}: Error finding container 60953f79ae7c81971a2816c3cd4dd34a647f2714a29b03dc7ad988170ddd25f5: Status 404 returned error can't find the container with id 60953f79ae7c81971a2816c3cd4dd34a647f2714a29b03dc7ad988170ddd25f5 Feb 19 15:09:31 crc kubenswrapper[4810]: E0219 15:09:31.958451 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.959050 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b823d0fc166af1a262313705cb5fa80db1ed53c8d99fa8483e2ac061d8d9c6c3 WatchSource:0}: Error finding container b823d0fc166af1a262313705cb5fa80db1ed53c8d99fa8483e2ac061d8d9c6c3: Status 404 returned error can't find the container with id b823d0fc166af1a262313705cb5fa80db1ed53c8d99fa8483e2ac061d8d9c6c3 Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.969728 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a33c7f8ee568780551d955855ad1c8a92f05fcee57b1935c14a9e6b65f643ea7 WatchSource:0}: Error finding container a33c7f8ee568780551d955855ad1c8a92f05fcee57b1935c14a9e6b65f643ea7: Status 404 returned error can't find the container with id 
a33c7f8ee568780551d955855ad1c8a92f05fcee57b1935c14a9e6b65f643ea7 Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.972741 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5877f22d7454693dfde4ef00e9f4f773e289432bac9886c16f6bd2add9b6645c WatchSource:0}: Error finding container 5877f22d7454693dfde4ef00e9f4f773e289432bac9886c16f6bd2add9b6645c: Status 404 returned error can't find the container with id 5877f22d7454693dfde4ef00e9f4f773e289432bac9886c16f6bd2add9b6645c Feb 19 15:09:31 crc kubenswrapper[4810]: W0219 15:09:31.980102 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e2a59f545a3cf0fbb23985c4ca39575cbe70b9d3b5b0e82bedc6dbb53f12bd50 WatchSource:0}: Error finding container e2a59f545a3cf0fbb23985c4ca39575cbe70b9d3b5b0e82bedc6dbb53f12bd50: Status 404 returned error can't find the container with id e2a59f545a3cf0fbb23985c4ca39575cbe70b9d3b5b0e82bedc6dbb53f12bd50 Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.238745 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.241062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.241319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.241367 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.241406 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 
15:09:32.242068 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.346845 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.353114 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:11:43.595534524 +0000 UTC Feb 19 15:09:32 crc kubenswrapper[4810]: W0219 15:09:32.358581 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 15:09:32.358672 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.444488 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5877f22d7454693dfde4ef00e9f4f773e289432bac9886c16f6bd2add9b6645c"} Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.446787 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a33c7f8ee568780551d955855ad1c8a92f05fcee57b1935c14a9e6b65f643ea7"} Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.448058 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b823d0fc166af1a262313705cb5fa80db1ed53c8d99fa8483e2ac061d8d9c6c3"} Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.450634 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"60953f79ae7c81971a2816c3cd4dd34a647f2714a29b03dc7ad988170ddd25f5"} Feb 19 15:09:32 crc kubenswrapper[4810]: I0219 15:09:32.455399 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e2a59f545a3cf0fbb23985c4ca39575cbe70b9d3b5b0e82bedc6dbb53f12bd50"} Feb 19 15:09:32 crc kubenswrapper[4810]: W0219 15:09:32.506243 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 15:09:32.506384 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:32 crc kubenswrapper[4810]: W0219 15:09:32.531813 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 15:09:32.531911 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:32 crc kubenswrapper[4810]: W0219 15:09:32.703302 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 15:09:32.703398 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:32 crc kubenswrapper[4810]: E0219 15:09:32.759534 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.042547 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.044701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc 
kubenswrapper[4810]: I0219 15:09:33.044784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.044807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.044855 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:33 crc kubenswrapper[4810]: E0219 15:09:33.046458 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.326788 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 15:09:33 crc kubenswrapper[4810]: E0219 15:09:33.328367 4810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.348164 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.353257 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:11:58.134973956 +0000 UTC Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.461338 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.461389 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.461402 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.463100 4810 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c" exitCode=0 Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.463166 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.463277 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.464781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.464813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 
crc kubenswrapper[4810]: I0219 15:09:33.464824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.466132 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f" exitCode=0 Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.466188 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.466268 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.467472 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.467497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.467508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.468837 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.469648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.469673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.469685 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.471014 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="397c9d51eba45e6f957d0be3f588b2d165a8f6256f1e4851f907c333e0c3f682" exitCode=0 Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.471064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"397c9d51eba45e6f957d0be3f588b2d165a8f6256f1e4851f907c333e0c3f682"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.471156 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.473023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.473084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.473107 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.475779 4810 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a" exitCode=0 Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.475859 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a"} Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.475882 4810 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.477133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.477171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:33 crc kubenswrapper[4810]: I0219 15:09:33.477188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: W0219 15:09:34.179509 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:34 crc kubenswrapper[4810]: E0219 15:09:34.179610 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:34 crc kubenswrapper[4810]: W0219 15:09:34.307566 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:34 crc kubenswrapper[4810]: E0219 15:09:34.307764 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.346511 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.354065 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:00:49.353697891 +0000 UTC Feb 19 15:09:34 crc kubenswrapper[4810]: E0219 15:09:34.360529 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.482225 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.482307 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.483712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.483762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.483778 4810 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.488587 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.488611 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.489918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.489987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.490011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.496240 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.496375 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.496409 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.496298 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.497824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.497858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.497867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.500887 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.500942 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.500974 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.501004 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.502796 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c824729b1a93ecb22b2e93e1d64ef98f79c50d75cd6aee9630895d67303d7953" exitCode=0 Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.502836 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c824729b1a93ecb22b2e93e1d64ef98f79c50d75cd6aee9630895d67303d7953"} Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.502982 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.504106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.504137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.504146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.646607 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.654221 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.654264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.654277 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:34 crc kubenswrapper[4810]: I0219 15:09:34.654308 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:34 crc kubenswrapper[4810]: E0219 15:09:34.655023 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Feb 19 15:09:34 crc kubenswrapper[4810]: W0219 15:09:34.912281 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Feb 19 15:09:34 crc kubenswrapper[4810]: E0219 15:09:34.912428 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.355101 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:08:36.548424542 +0000 UTC Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.509301 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff2b89fd8f15643fd8228815a695386b60b4635c43b88234f846cd840a791bce" exitCode=0 Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.509439 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff2b89fd8f15643fd8228815a695386b60b4635c43b88234f846cd840a791bce"} Feb 19 15:09:35 crc 
kubenswrapper[4810]: I0219 15:09:35.509861 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.511466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.511533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.511560 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.518437 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790"} Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.518571 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.518640 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.518640 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.518807 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.519100 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 
15:09:35.521362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521419 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521437 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.521991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.522227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:35 crc kubenswrapper[4810]: I0219 15:09:35.522434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.221730 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.356182 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:56:33.386142529 +0000 UTC Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.530166 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.530836 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1098bba9b5e7df7106d7f03ae361df7c0d1bb01b84c6a3cbea34d4c7cdd2a3b3"} Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.530896 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e245e8e90ddec80f114e7a9d912616b52aa1875e68d44bdf537aa602730d340"} Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.530917 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"09d8f1a389e624713c0692bb24385e8a7a657bd8cc24223cbaf6b15949344405"} Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.531007 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.531079 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533114 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533178 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533482 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533539 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:36 crc kubenswrapper[4810]: I0219 15:09:36.533565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.356938 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:14:40.968983841 +0000 UTC Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.367202 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.530733 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.541352 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77ecdebbe94adaa82b19c5712ca504262f4a0c229997260f6cd30eecb782ac03"} Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.541438 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f23fccc222edb47509d064f3a1a912a1e11b0f4a6d3819c4dabeab55d9d473f1"} Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.541439 
4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.541519 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543769 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.543827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.855195 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.857215 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.857358 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:37 crc kubenswrapper[4810]: I0219 15:09:37.857398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:37 crc 
kubenswrapper[4810]: I0219 15:09:37.857458 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.244072 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.346313 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.358030 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:02:19.626989304 +0000 UTC Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.361375 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.361621 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.363110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.363145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.363157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.545314 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.545351 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.546957 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.547019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.547034 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.548195 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.548242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:38 crc kubenswrapper[4810]: I0219 15:09:38.548258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.282739 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.358203 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:41:10.733065256 +0000 UTC Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.548027 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.548140 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549048 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:39 crc kubenswrapper[4810]: I0219 15:09:39.549557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:40 crc kubenswrapper[4810]: I0219 15:09:40.359271 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:09:43.476250423 +0000 UTC Feb 19 15:09:40 crc kubenswrapper[4810]: I0219 15:09:40.550766 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:40 crc kubenswrapper[4810]: I0219 15:09:40.552392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:40 crc kubenswrapper[4810]: I0219 15:09:40.552475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:40 crc kubenswrapper[4810]: I0219 15:09:40.552500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:41 crc kubenswrapper[4810]: I0219 15:09:41.360119 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:39:05.335070695 +0000 UTC Feb 19 15:09:41 crc kubenswrapper[4810]: E0219 15:09:41.554745 4810 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.146200 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.146466 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.148266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.148393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.148414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.360845 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:20:34.691335119 +0000 UTC Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.860549 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.860739 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.862278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:42 crc kubenswrapper[4810]: I0219 15:09:42.862339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:42 crc 
kubenswrapper[4810]: I0219 15:09:42.862351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.361868 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:09:34.869279181 +0000 UTC Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.538636 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.545762 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.559678 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.560993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.561055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.561076 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:43 crc kubenswrapper[4810]: I0219 15:09:43.565409 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:44 crc kubenswrapper[4810]: I0219 15:09:44.362524 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:18:42.796046109 +0000 UTC Feb 19 15:09:44 crc kubenswrapper[4810]: I0219 
15:09:44.562598 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:44 crc kubenswrapper[4810]: I0219 15:09:44.564241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:44 crc kubenswrapper[4810]: I0219 15:09:44.564318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:44 crc kubenswrapper[4810]: I0219 15:09:44.564435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.328918 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56004->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.329008 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56004->192.168.126.11:17697: read: connection reset by peer" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.348013 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.363171 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:40:25.325883947 +0000 UTC Feb 19 15:09:45 crc 
kubenswrapper[4810]: I0219 15:09:45.567023 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.569347 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790" exitCode=255 Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.569493 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.569529 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790"} Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.569711 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.570861 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.570918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.570929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.571233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.571256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:45 crc 
kubenswrapper[4810]: I0219 15:09:45.571264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.571866 4810 scope.go:117] "RemoveContainer" containerID="f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790" Feb 19 15:09:45 crc kubenswrapper[4810]: W0219 15:09:45.717403 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.717480 4810 trace.go:236] Trace[1968846068]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 15:09:35.715) (total time: 10001ms): Feb 19 15:09:45 crc kubenswrapper[4810]: Trace[1968846068]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:09:45.717) Feb 19 15:09:45 crc kubenswrapper[4810]: Trace[1968846068]: [10.001945165s] [10.001945165s] END Feb 19 15:09:45 crc kubenswrapper[4810]: E0219 15:09:45.717502 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.861070 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 19 15:09:45 crc kubenswrapper[4810]: I0219 15:09:45.861182 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.364467 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:28:45.449930866 +0000 UTC Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.464135 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.498696 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.498805 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.506350 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.506642 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.576263 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.578311 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65"} Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.578466 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.579413 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.579573 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:46 crc kubenswrapper[4810]: I0219 15:09:46.579651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.365814 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:54:04.513238034 +0000 UTC 
Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.535540 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]log ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]etcd ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-filter ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-apiextensions-informers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-apiextensions-controllers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/crd-informer-synced ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-system-namespaces-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/bootstrap-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/start-kube-aggregator-informers ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-registration-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-discovery-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]autoregister-completion ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-openapi-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 19 15:09:47 crc kubenswrapper[4810]: livez check failed Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.538298 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.580968 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.581071 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.582116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.582309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:47 crc kubenswrapper[4810]: I0219 15:09:47.582474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:48 crc kubenswrapper[4810]: I0219 15:09:48.367099 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:48:28.136455972 +0000 UTC Feb 19 15:09:48 crc kubenswrapper[4810]: I0219 15:09:48.589409 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:48 crc kubenswrapper[4810]: I0219 15:09:48.590925 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:48 crc kubenswrapper[4810]: I0219 15:09:48.590964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:48 crc kubenswrapper[4810]: I0219 15:09:48.590976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.320719 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-crc" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.320954 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.322609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.322673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.322691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.336371 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.367845 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:04:15.430721913 +0000 UTC Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.591924 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.593178 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.593264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.593283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:49 crc kubenswrapper[4810]: I0219 15:09:49.737071 4810 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 15:09:50 crc 
kubenswrapper[4810]: I0219 15:09:50.343304 4810 apiserver.go:52] "Watching apiserver" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.349912 4810 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.350654 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.351257 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.351439 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.351658 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:50 crc kubenswrapper[4810]: E0219 15:09:50.351756 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:09:50 crc kubenswrapper[4810]: E0219 15:09:50.351775 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.352048 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.352914 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.352946 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:50 crc kubenswrapper[4810]: E0219 15:09:50.352999 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.355541 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.355541 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.356846 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.356940 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.357104 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.357317 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.357671 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.357884 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.358070 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.368676 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-11 06:49:27.434453389 +0000 UTC Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.390628 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.408425 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.423906 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.444409 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.451078 4810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.457470 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.472307 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:50 crc kubenswrapper[4810]: I0219 15:09:50.484653 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.369247 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:25:15.070750763 +0000 UTC Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.438772 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.438794 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.438929 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.439264 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.457705 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.475642 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.491307 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.493351 4810 trace.go:236] Trace[44012123]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 15:09:39.444) (total time: 12049ms): Feb 19 15:09:51 crc kubenswrapper[4810]: Trace[44012123]: ---"Objects listed" error: 12049ms (15:09:51.493) Feb 19 15:09:51 crc 
kubenswrapper[4810]: Trace[44012123]: [12.049199069s] [12.049199069s] END Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.493372 4810 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.494102 4810 trace.go:236] Trace[1393407943]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 15:09:39.807) (total time: 11686ms): Feb 19 15:09:51 crc kubenswrapper[4810]: Trace[1393407943]: ---"Objects listed" error: 11686ms (15:09:51.494) Feb 19 15:09:51 crc kubenswrapper[4810]: Trace[1393407943]: [11.686360415s] [11.686360415s] END Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.494120 4810 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.494459 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.496137 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.496289 4810 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.497363 4810 trace.go:236] Trace[1148226231]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 15:09:38.918) (total time: 12578ms): Feb 19 15:09:51 crc kubenswrapper[4810]: Trace[1148226231]: ---"Objects listed" error: 12578ms (15:09:51.497) Feb 19 15:09:51 crc kubenswrapper[4810]: Trace[1148226231]: [12.578774292s] [12.578774292s] END Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.497645 4810 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.509181 4810 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.512574 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.530871 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.550042 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596778 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596848 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596879 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596909 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596948 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.596975 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.597040 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.597963 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.597968 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598055 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598116 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598374 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598838 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598877 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598929 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598948 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.598979 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599021 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599043 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599109 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599717 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599766 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599810 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.599891 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600286 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600345 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600394 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600420 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600415 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600504 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600546 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600756 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600797 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.600878 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.601120 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.601243 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.601309 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.601641 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.601750 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.602769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.602883 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.604267 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.604423 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.604539 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.604612 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 
15:09:51.604665 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.604983 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605296 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605394 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605521 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605759 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605953 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.605989 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.606134 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:09:52.106106584 +0000 UTC m=+21.588136718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606167 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606312 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606628 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606704 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.606731 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607120 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607065 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607254 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607600 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607279 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607738 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607880 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607885 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.607913 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608085 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608159 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608187 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608817 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608896 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608737 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608788 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.608993 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609237 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609398 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609482 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609728 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609779 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.609844 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.610288 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: 
"1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.614796 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.614876 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.614914 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.614942 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.614976 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " 
Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615000 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615024 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615072 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615096 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615121 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615154 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615177 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615201 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615235 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615276 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: 
I0219 15:09:51.615309 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615371 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615402 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615426 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615456 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615493 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615542 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615570 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615605 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615629 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615656 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615685 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615707 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615729 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615753 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615775 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615800 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") 
pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615821 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615842 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615870 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615891 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615913 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615945 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615968 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.615991 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616016 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616043 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616066 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616089 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616112 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616135 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616158 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616182 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616205 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616228 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616254 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616277 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616298 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616320 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 
15:09:51.616363 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616386 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616408 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616432 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616496 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616519 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616541 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616574 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616608 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616640 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616671 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 15:09:51 crc 
kubenswrapper[4810]: I0219 15:09:51.616692 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616714 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616736 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616779 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616803 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616827 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616854 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616878 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616902 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616925 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616947 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616969 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.616994 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617024 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617070 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617106 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617130 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617154 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617176 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617199 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617223 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617270 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617292 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.617314 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620339 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620387 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620418 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620454 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620483 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620510 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620537 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 15:09:51 
crc kubenswrapper[4810]: I0219 15:09:51.620563 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620595 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620624 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620649 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620674 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620705 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620730 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620754 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620779 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620806 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620832 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620854 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620880 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620906 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620932 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620959 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620989 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621014 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621038 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621059 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621094 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621117 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621141 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621164 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621189 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621240 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621264 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621290 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621316 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621372 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621395 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621420 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: 
I0219 15:09:51.621446 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621498 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621522 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621545 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621570 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621599 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621624 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621650 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621698 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621723 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621748 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621772 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621798 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621821 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621844 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621867 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621892 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621919 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621945 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621971 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621996 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622020 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622075 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622107 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622138 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622168 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622194 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622221 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622252 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622281 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622305 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622402 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622427 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622455 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622481 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622505 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622586 4810 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622603 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622619 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622632 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622647 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622662 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622676 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622690 4810 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622704 4810 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622718 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622732 4810 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622745 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622758 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622772 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622786 4810 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath 
\"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622801 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622814 4810 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622828 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622842 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622855 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622868 4810 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622881 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622894 4810 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622909 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622922 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622937 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622950 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622964 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622978 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622991 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623005 4810 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623020 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623040 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623054 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623068 4810 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623082 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623096 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623109 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623124 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623137 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623157 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623171 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.618587 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637164 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.618600 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.618908 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.618985 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619029 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619258 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619448 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619635 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619681 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619670 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619773 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619726 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.619904 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620017 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620026 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620465 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.620880 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621140 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621219 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621237 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621582 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621862 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.621887 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.622177 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.623876 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.624646 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.625873 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.625955 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.625299 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626207 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626235 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626435 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626488 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.626884 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.627052 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.627435 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.627529 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628039 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628413 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628670 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628712 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628896 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.628959 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.629158 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.629208 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.629464 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.629972 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.630665 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.630933 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.630928 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.632674 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.632891 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.632918 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.633269 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.633632 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.633865 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.634066 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.634187 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.634278 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.634510 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.635386 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.635982 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.636097 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.636255 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.636490 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.636493 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.636881 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637800 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637095 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637498 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637625 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637735 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.637908 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.638033 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.638118 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.638459 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.638735 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639268 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639298 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639335 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639007 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.639478 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:52.139417892 +0000 UTC m=+21.621448006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639652 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.639906 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.640113 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.640276 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.640412 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.640438 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.640459 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.640584 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:52.14056075 +0000 UTC m=+21.622590894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.640663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.640789 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.641585 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.641781 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.642449 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.643880 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.643947 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.644305 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.644409 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.644768 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:52.144666852 +0000 UTC m=+21.626697156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.644883 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.645826 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.646182 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.637125 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.648059 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.648108 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.649123 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.649522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.650014 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.655955 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.650650 4810 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.656341 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.656419 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.656663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.656948 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.660360 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.660493 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.660728 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.660799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.660871 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.657124 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.661214 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.661429 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.661458 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.661493 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.662169 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.662520 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.662801 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.663068 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: E0219 15:09:51.663214 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:52.163169332 +0000 UTC m=+21.645199466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.664138 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.664363 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.664438 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.664663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.664730 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.665043 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.665568 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.666064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.666124 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.666122 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.667154 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.667950 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.667964 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.668045 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.668667 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.668764 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.668815 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669060 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669128 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669232 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669284 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.669551 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.672626 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.675723 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.675961 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.675762 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.675886 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.675892 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.676149 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.676260 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.677815 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.679407 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.679604 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.680466 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.684102 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.685981 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.696189 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.707539 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724419 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724533 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724546 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724555 4810 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724564 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724573 4810 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724581 4810 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724590 4810 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724600 4810 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724610 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724619 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724628 4810 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724636 4810 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724644 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724653 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724661 4810 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724670 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724678 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724687 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724695 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724703 4810 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724712 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724722 4810 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724731 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724740 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724749 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724758 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724766 4810 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724775 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724779 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724783 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724807 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724816 4810 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724827 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724837 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724845 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724854 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724863 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724871 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724880 4810 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724890 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724900 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724910 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724919 4810 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724928 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724937 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724946 4810 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724955 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724963 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724971 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724980 4810 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724988 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.724997 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725006 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 
15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725015 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725023 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725031 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725040 4810 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725048 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725056 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725066 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725074 4810 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725083 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725096 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725104 4810 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725113 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725122 4810 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725131 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725139 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725148 4810 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725157 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725176 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725186 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725195 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725203 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725211 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc 
kubenswrapper[4810]: I0219 15:09:51.725220 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725228 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725236 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725245 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725254 4810 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725262 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725273 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 
crc kubenswrapper[4810]: I0219 15:09:51.725281 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725289 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725297 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725305 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725313 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725346 4810 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725356 4810 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725364 4810 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725372 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725380 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725389 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725398 4810 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725408 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725416 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725424 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725433 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725441 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725450 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725457 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725465 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725473 4810 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725482 4810 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 
19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725494 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725502 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725510 4810 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725518 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725526 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725535 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725542 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725550 4810 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725558 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725567 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725574 4810 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725583 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725591 4810 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725600 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725608 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725616 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725624 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725632 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725641 4810 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725649 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725658 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725670 4810 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 
15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725681 4810 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725689 4810 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725698 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725707 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725714 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725724 4810 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725732 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725739 4810 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725748 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725756 4810 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725766 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725774 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725783 4810 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725792 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725800 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725809 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725817 4810 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725825 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725834 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725843 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725851 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725860 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node 
\"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725869 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725877 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725885 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725893 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725902 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.725911 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.872217 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.884709 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 15:09:51 crc kubenswrapper[4810]: I0219 15:09:51.890450 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 15:09:51 crc kubenswrapper[4810]: W0219 15:09:51.908573 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1be1bd45f6ca011de1f6f387ecb73eb3ec669d743b57fc58aadfa5e2fbfbed90 WatchSource:0}: Error finding container 1be1bd45f6ca011de1f6f387ecb73eb3ec669d743b57fc58aadfa5e2fbfbed90: Status 404 returned error can't find the container with id 1be1bd45f6ca011de1f6f387ecb73eb3ec669d743b57fc58aadfa5e2fbfbed90 Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.131548 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.131793 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:09:53.131750258 +0000 UTC m=+22.613780382 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.233005 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.233055 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.233077 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.233097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233207 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233262 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:53.23324536 +0000 UTC m=+22.715275484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233655 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233668 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233679 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233702 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:53.233694811 +0000 UTC m=+22.715724935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233758 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233767 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233774 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233791 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:53.233785974 +0000 UTC m=+22.715816098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233816 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.233835 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:53.233828875 +0000 UTC m=+22.715858989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.370819 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:42:09.014127784 +0000 UTC Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.438619 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.438802 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.535702 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.540963 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.547469 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.548139 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.560052 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.587712 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.602640 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.603231 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 15:09:52 crc kubenswrapper[4810]: 
I0219 15:09:52.605076 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" exitCode=255 Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.605141 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.605226 4810 scope.go:117] "RemoveContainer" containerID="f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.607130 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4fcbafc57eac3fa9bee102fb74894c7065c4619a5ee2264d1274592f2301e55d"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.609848 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.609882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.609897 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1be1bd45f6ca011de1f6f387ecb73eb3ec669d743b57fc58aadfa5e2fbfbed90"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.611346 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b"} Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.611396 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"163227e588ccab22aa17179b63f314531c5f3e6219dab5243c4fddde2ef2e86f"} Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.616066 4810 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.616256 4810 scope.go:117] "RemoveContainer" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" Feb 19 15:09:52 crc kubenswrapper[4810]: E0219 15:09:52.616417 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.617868 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.635965 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.637610 4810 csr.go:261] certificate signing request csr-q6c86 is approved, waiting to be issued Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.652836 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.653523 4810 csr.go:257] certificate signing request csr-q6c86 is issued Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.667568 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.679142 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.697108 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.710445 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.725640 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.737699 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:45Z\\\",\\\"message\\\":\\\"W0219 15:09:34.723476 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 15:09:34.724418 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771513774 cert, and key in /tmp/serving-cert-2495124582/serving-signer.crt, /tmp/serving-cert-2495124582/serving-signer.key\\\\nI0219 15:09:35.062079 1 observer_polling.go:159] Starting file observer\\\\nW0219 15:09:35.062187 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 15:09:35.062309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:35.064632 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2495124582/tls.crt::/tmp/serving-cert-2495124582/tls.key\\\\\\\"\\\\nF0219 15:09:45.321489 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.756226 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.864902 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.871365 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.879383 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.901388 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.917086 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.921206 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.956204 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:45Z\\\",\\\"message\\\":\\\"W0219 15:09:34.723476 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 15:09:34.724418 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771513774 cert, and key in /tmp/serving-cert-2495124582/serving-signer.crt, 
/tmp/serving-cert-2495124582/serving-signer.key\\\\nI0219 15:09:35.062079 1 observer_polling.go:159] Starting file observer\\\\nW0219 15:09:35.062187 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 15:09:35.062309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:35.064632 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2495124582/tls.crt::/tmp/serving-cert-2495124582/tls.key\\\\\\\"\\\\nF0219 15:09:45.321489 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:52 crc kubenswrapper[4810]: I0219 15:09:52.984003 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.028295 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.077353 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.104761 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.116785 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.129365 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.142683 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.142952 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:09:55.142909947 +0000 UTC m=+24.624940101 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.144066 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:45Z\\\",\\\"message\\\":\\\"W0219 15:09:34.723476 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 15:09:34.724418 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771513774 cert, and key in /tmp/serving-cert-2495124582/serving-signer.crt, /tmp/serving-cert-2495124582/serving-signer.key\\\\nI0219 15:09:35.062079 1 observer_polling.go:159] Starting file observer\\\\nW0219 15:09:35.062187 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 15:09:35.062309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:35.064632 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2495124582/tls.crt::/tmp/serving-cert-2495124582/tls.key\\\\\\\"\\\\nF0219 15:09:45.321489 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.156688 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-flbx5"] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.157082 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bsztz"] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.157245 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.157376 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.160386 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.160414 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.160515 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.160706 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.161485 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.166712 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.166987 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.166988 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.171063 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.183728 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.195417 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.207632 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.222822 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1b00844d1da0b4c140f9ebcb4acffd7825f60d193021a860831e5fd754b2790\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:45Z\\\",\\\"message\\\":\\\"W0219 15:09:34.723476 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 15:09:34.724418 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771513774 cert, and key in /tmp/serving-cert-2495124582/serving-signer.crt, /tmp/serving-cert-2495124582/serving-signer.key\\\\nI0219 15:09:35.062079 1 observer_polling.go:159] Starting file observer\\\\nW0219 15:09:35.062187 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 15:09:35.062309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:35.064632 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2495124582/tls.crt::/tmp/serving-cert-2495124582/tls.key\\\\\\\"\\\\nF0219 15:09:45.321489 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.239088 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243353 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-conf-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243407 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-etc-kubernetes\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243450 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243482 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-socket-dir-parent\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243507 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243534 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-os-release\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243558 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-multus\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243588 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-kubelet\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243709 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243799 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cc3ea69-b881-4fd4-ad4d-42803f27865b-hosts-file\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243829 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqph9\" (UniqueName: \"kubernetes.io/projected/4cc3ea69-b881-4fd4-ad4d-42803f27865b-kube-api-access-gqph9\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243851 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-cni-binary-copy\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243872 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-bin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-hostroot\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.243948 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.243989 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244002 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.243968 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244066 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:55.24404461 +0000 UTC m=+24.726074734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244091 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244120 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244133 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244092 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-cnibin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244190 4810 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:55.244168163 +0000 UTC m=+24.726198347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244216 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-k8s-cni-cncf-io\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244241 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-multus-daemon-config\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244270 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-system-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244294 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcg8h\" (UniqueName: \"kubernetes.io/projected/2a45a199-beeb-4972-b796-15c958fe99d3-kube-api-access-pcg8h\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244318 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-netns\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244358 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-multus-certs\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.244397 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244477 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244514 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-19 15:09:55.244504361 +0000 UTC m=+24.726534525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244597 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.244653 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:55.244641895 +0000 UTC m=+24.726672219 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.258679 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.275059 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.291262 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.304263 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.322990 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.334055 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.345611 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-system-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.345742 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcg8h\" (UniqueName: \"kubernetes.io/projected/2a45a199-beeb-4972-b796-15c958fe99d3-kube-api-access-pcg8h\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.345824 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-netns\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.345865 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-system-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.345893 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-multus-certs\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346036 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-etc-kubernetes\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346094 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-socket-dir-parent\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-conf-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346147 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346182 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-os-release\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346257 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-multus\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346282 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-kubelet\") pod \"multus-bsztz\" (UID: 
\"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346371 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cc3ea69-b881-4fd4-ad4d-42803f27865b-hosts-file\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346408 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqph9\" (UniqueName: \"kubernetes.io/projected/4cc3ea69-b881-4fd4-ad4d-42803f27865b-kube-api-access-gqph9\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346454 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-cni-binary-copy\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346477 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-hostroot\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346517 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-cnibin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346544 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-k8s-cni-cncf-io\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346568 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-bin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346591 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-multus-daemon-config\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.346422 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-multus-certs\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347013 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-netns\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347148 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-kubelet\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347253 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-etc-kubernetes\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347367 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-socket-dir-parent\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347452 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-conf-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347751 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-multus-cni-dir\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347507 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cc3ea69-b881-4fd4-ad4d-42803f27865b-hosts-file\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" 
Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348046 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-cni-binary-copy\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348111 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-hostroot\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348145 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-multus\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348159 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-cnibin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348183 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-run-k8s-cni-cncf-io\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348257 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-host-var-lib-cni-bin\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.347463 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2a45a199-beeb-4972-b796-15c958fe99d3-multus-daemon-config\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348425 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2a45a199-beeb-4972-b796-15c958fe99d3-os-release\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.348722 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.366046 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcg8h\" (UniqueName: \"kubernetes.io/projected/2a45a199-beeb-4972-b796-15c958fe99d3-kube-api-access-pcg8h\") pod \"multus-bsztz\" (UID: \"2a45a199-beeb-4972-b796-15c958fe99d3\") " pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc 
kubenswrapper[4810]: I0219 15:09:53.366672 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.371498 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:10:53.975287492 +0000 UTC Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.374158 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqph9\" (UniqueName: \"kubernetes.io/projected/4cc3ea69-b881-4fd4-ad4d-42803f27865b-kube-api-access-gqph9\") pod \"node-resolver-flbx5\" (UID: \"4cc3ea69-b881-4fd4-ad4d-42803f27865b\") " pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.439738 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.439955 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.439760 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.440492 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.445765 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.446297 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.447638 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.451623 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.473449 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-flbx5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.478722 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bsztz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.487579 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: W0219 15:09:53.508150 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a45a199_beeb_4972_b796_15c958fe99d3.slice/crio-384873e32a6b0156755a3eec1f2afe540afe5a90b1090f7abd9efa003b900161 WatchSource:0}: Error finding container 384873e32a6b0156755a3eec1f2afe540afe5a90b1090f7abd9efa003b900161: Status 404 returned error can't find the container with id 384873e32a6b0156755a3eec1f2afe540afe5a90b1090f7abd9efa003b900161 Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.520444 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.521286 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.522444 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.523120 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.524110 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.524666 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.525638 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.526172 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.529785 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.556902 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.557565 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.600601 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.601320 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.601941 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.603029 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.603560 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.604563 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.605041 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.606100 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.606581 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.607197 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.609199 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.609706 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.610735 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.611258 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.612703 4810 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.612816 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.614514 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.616978 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.619475 4810 scope.go:117] "RemoveContainer" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" Feb 19 15:09:53 crc kubenswrapper[4810]: E0219 15:09:53.619639 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.636002 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875
ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.639757 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.640711 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.643241 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.648798 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.650133 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.651102 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.651984 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.653496 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.654446 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.654498 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.654731 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 15:04:52 +0000 UTC, rotation deadline is 2026-12-26 04:10:58.342860789 +0000 UTC Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.654784 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7429h1m4.688079321s for next certificate rotation Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.655513 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.656344 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.657438 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.658049 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.659236 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.660469 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.662019 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.665697 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.666179 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.666988 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.668569 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.669174 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.670190 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerStarted","Data":"384873e32a6b0156755a3eec1f2afe540afe5a90b1090f7abd9efa003b900161"} Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.670223 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flbx5" event={"ID":"4cc3ea69-b881-4fd4-ad4d-42803f27865b","Type":"ContainerStarted","Data":"8009e6833f60100903c37a52f0f744b03f602571cf467798dc90511180417ded"} Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.670240 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nmbsx"] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.673135 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.676472 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t499d"] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.676843 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.677547 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.681085 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.681369 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.681584 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.681739 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.681894 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.682273 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.682439 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.691455 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.715056 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.731804 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.754365 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-binary-copy\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.754275 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.754543 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-cnibin\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.754694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dfbf74ef-1e94-4826-8583-42b2e246ccf3-rootfs\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.754983 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-os-release\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755058 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwlb\" (UniqueName: \"kubernetes.io/projected/247e948b-3c17-4675-bd1c-f894b02d2817-kube-api-access-qmwlb\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755101 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfbf74ef-1e94-4826-8583-42b2e246ccf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755155 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfbf74ef-1e94-4826-8583-42b2e246ccf3-proxy-tls\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755247 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-system-cni-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") 
" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755280 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755315 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.755384 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrkt\" (UniqueName: \"kubernetes.io/projected/dfbf74ef-1e94-4826-8583-42b2e246ccf3-kube-api-access-kjrkt\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.772697 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.789048 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.806403 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.825297 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.844639 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856427 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dfbf74ef-1e94-4826-8583-42b2e246ccf3-rootfs\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856497 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-os-release\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856520 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwlb\" (UniqueName: \"kubernetes.io/projected/247e948b-3c17-4675-bd1c-f894b02d2817-kube-api-access-qmwlb\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856544 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfbf74ef-1e94-4826-8583-42b2e246ccf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856579 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/dfbf74ef-1e94-4826-8583-42b2e246ccf3-proxy-tls\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856608 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-system-cni-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856601 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dfbf74ef-1e94-4826-8583-42b2e246ccf3-rootfs\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856625 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856717 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856740 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kjrkt\" (UniqueName: \"kubernetes.io/projected/dfbf74ef-1e94-4826-8583-42b2e246ccf3-kube-api-access-kjrkt\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856757 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-os-release\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856804 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-binary-copy\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856829 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-cnibin\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856872 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.856891 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-cnibin\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.857652 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfbf74ef-1e94-4826-8583-42b2e246ccf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.857717 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.857828 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/247e948b-3c17-4675-bd1c-f894b02d2817-system-cni-dir\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.857899 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/247e948b-3c17-4675-bd1c-f894b02d2817-cni-binary-copy\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.861994 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfbf74ef-1e94-4826-8583-42b2e246ccf3-proxy-tls\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.862948 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b
204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.875728 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrkt\" (UniqueName: \"kubernetes.io/projected/dfbf74ef-1e94-4826-8583-42b2e246ccf3-kube-api-access-kjrkt\") pod \"machine-config-daemon-t499d\" (UID: \"dfbf74ef-1e94-4826-8583-42b2e246ccf3\") " pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.878736 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qmwlb\" (UniqueName: \"kubernetes.io/projected/247e948b-3c17-4675-bd1c-f894b02d2817-kube-api-access-qmwlb\") pod \"multus-additional-cni-plugins-nmbsx\" (UID: \"247e948b-3c17-4675-bd1c-f894b02d2817\") " pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.881116 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\"
,\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.898246 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.915127 4810 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.924701 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8k7p5"] Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.925723 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.927349 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.927635 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.927895 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.928047 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.928567 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.928875 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.929112 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.932299 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.946122 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.958785 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.974380 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:53 crc kubenswrapper[4810]: I0219 15:09:53.987600 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.000759 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.000932 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: W0219 15:09:54.013176 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbf74ef_1e94_4826_8583_42b2e246ccf3.slice/crio-a970a79f27d05e1726f430916427e3c1c23747211a4c62f3528a1df835ec6fdd WatchSource:0}: Error finding container a970a79f27d05e1726f430916427e3c1c23747211a4c62f3528a1df835ec6fdd: Status 404 returned error can't find the container with id a970a79f27d05e1726f430916427e3c1c23747211a4c62f3528a1df835ec6fdd Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.021021 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.026411 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: W0219 15:09:54.037472 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod247e948b_3c17_4675_bd1c_f894b02d2817.slice/crio-f4ae6f249d84406b535022e86552a2e5916555094ead428fd4bc4d3f7263983b WatchSource:0}: Error finding container f4ae6f249d84406b535022e86552a2e5916555094ead428fd4bc4d3f7263983b: Status 404 returned error can't find the container with id f4ae6f249d84406b535022e86552a2e5916555094ead428fd4bc4d3f7263983b Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.058882 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.058927 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.058946 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.058971 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.058988 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 
15:09:54.059183 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059202 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xqv\" (UniqueName: \"kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059319 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc 
kubenswrapper[4810]: I0219 15:09:54.059354 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059371 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059387 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059404 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059427 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059444 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059458 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059473 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059488 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.059511 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.114295 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.154767 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159873 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xqv\" (UniqueName: \"kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159924 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet\") pod 
\"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159938 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159958 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159972 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159988 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.159992 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160012 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160057 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160111 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160152 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160176 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc 
kubenswrapper[4810]: I0219 15:09:54.160178 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160255 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160232 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160201 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160304 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160351 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160369 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160402 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160418 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160387 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160358 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160448 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160500 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160521 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160553 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160553 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160578 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160604 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160645 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160671 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160721 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket\") pod 
\"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.160939 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.161185 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.161652 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.164426 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.179933 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.189199 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xqv\" (UniqueName: \"kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv\") pod \"ovnkube-node-8k7p5\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.203218 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.223703 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.235504 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.241664 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:09:54 crc kubenswrapper[4810]: W0219 15:09:54.254618 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a8a15c_53e8_4868_8feb_dcd4e83939a4.slice/crio-18f9a8e62a518ac5ed0415309aa8b6acc10ac0aef8f801aae58cbe85d9127027 WatchSource:0}: Error finding container 18f9a8e62a518ac5ed0415309aa8b6acc10ac0aef8f801aae58cbe85d9127027: Status 404 returned error can't find the container with id 18f9a8e62a518ac5ed0415309aa8b6acc10ac0aef8f801aae58cbe85d9127027 Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.256856 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.274073 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.291677 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.306378 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.330769 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.373011 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:16:45.592358663 +0000 UTC Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.439175 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:54 crc kubenswrapper[4810]: E0219 15:09:54.439303 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.622450 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" exitCode=0 Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.622657 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.622793 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"18f9a8e62a518ac5ed0415309aa8b6acc10ac0aef8f801aae58cbe85d9127027"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.623563 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerStarted","Data":"7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.623584 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerStarted","Data":"f4ae6f249d84406b535022e86552a2e5916555094ead428fd4bc4d3f7263983b"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.625115 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2"} 
Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.626349 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-flbx5" event={"ID":"4cc3ea69-b881-4fd4-ad4d-42803f27865b","Type":"ContainerStarted","Data":"07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.632886 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerStarted","Data":"a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.634791 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.634854 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.634868 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"a970a79f27d05e1726f430916427e3c1c23747211a4c62f3528a1df835ec6fdd"} Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.646452 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.659312 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.677073 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.696425 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.709373 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.724791 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.740994 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.758472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.771540 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.790553 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.831562 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.852011 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.896800 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.909395 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.926089 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.944128 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.955301 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.978109 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:54 crc kubenswrapper[4810]: I0219 15:09:54.992735 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:54Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.007385 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.027107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.048064 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.061528 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.077032 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.094179 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.108008 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.169935 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.170162 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:09:59.170119848 +0000 UTC m=+28.652149972 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.272413 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.272465 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.272493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.272525 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272587 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272653 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272669 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272699 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272675 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:59.272652646 +0000 UTC m=+28.754682770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272713 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272719 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272769 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272787 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272739 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:59.272717778 +0000 UTC m=+28.754747902 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272886 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:59.272858051 +0000 UTC m=+28.754888325 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.272917 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:09:59.272906452 +0000 UTC m=+28.754936816 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.373636 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:53:00.164239586 +0000 UTC Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.439059 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.439054 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.439241 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:55 crc kubenswrapper[4810]: E0219 15:09:55.439396 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647369 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647424 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647434 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647452 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.647461 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" 
event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.648740 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64" exitCode=0 Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.648882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64"} Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.669785 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.704277 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.729040 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.752977 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.767187 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.781360 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.805080 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.816947 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-t9jnq"] Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.817539 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.819409 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.819898 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.820380 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.820620 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.820838 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 
15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.840576 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.862065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.881259 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.897933 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.910919 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.925837 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.941472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.956138 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.981392 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.981566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60d7228b-14bd-4988-8dca-cb89f487ba00-serviceca\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.981623 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60d7228b-14bd-4988-8dca-cb89f487ba00-host\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.981693 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzvj\" (UniqueName: \"kubernetes.io/projected/60d7228b-14bd-4988-8dca-cb89f487ba00-kube-api-access-5gzvj\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:55 crc kubenswrapper[4810]: I0219 15:09:55.994303 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:55Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.010395 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.023216 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.035365 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.066429 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.082754 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60d7228b-14bd-4988-8dca-cb89f487ba00-serviceca\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.082836 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/60d7228b-14bd-4988-8dca-cb89f487ba00-host\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.082897 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzvj\" (UniqueName: \"kubernetes.io/projected/60d7228b-14bd-4988-8dca-cb89f487ba00-kube-api-access-5gzvj\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.083106 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/60d7228b-14bd-4988-8dca-cb89f487ba00-host\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.084087 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/60d7228b-14bd-4988-8dca-cb89f487ba00-serviceca\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.107541 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.140472 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzvj\" (UniqueName: \"kubernetes.io/projected/60d7228b-14bd-4988-8dca-cb89f487ba00-kube-api-access-5gzvj\") pod \"node-ca-t9jnq\" (UID: \"60d7228b-14bd-4988-8dca-cb89f487ba00\") " pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.156883 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-t9jnq" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.167718 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.207290 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var
/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.245749 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.297466 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.374798 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:39:22.43422633 +0000 UTC Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.438525 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:56 crc kubenswrapper[4810]: E0219 15:09:56.438676 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.464723 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.465688 4810 scope.go:117] "RemoveContainer" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" Feb 19 15:09:56 crc kubenswrapper[4810]: E0219 15:09:56.465970 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.655929 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerStarted","Data":"395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd"} Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.657117 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t9jnq" event={"ID":"60d7228b-14bd-4988-8dca-cb89f487ba00","Type":"ContainerStarted","Data":"4c381c6531fba5bdfc94af20d71b77afd5e67b0cb6fc49bd037be0b5bf8f26e5"} Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.676302 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.699974 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.711409 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.735923 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.754488 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.768220 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.781406 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.800425 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.823459 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.838116 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.854769 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.867978 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.885184 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:56 crc kubenswrapper[4810]: I0219 15:09:56.896968 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:56Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.375577 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:13:24.488323314 +0000 UTC Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.438423 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.438518 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.438634 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.438858 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.662976 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-t9jnq" event={"ID":"60d7228b-14bd-4988-8dca-cb89f487ba00","Type":"ContainerStarted","Data":"cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.666747 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd" exitCode=0 Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.666820 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.688822 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.711146 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.735920 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.759445 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.779854 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.797509 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.816472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.841707 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.858407 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.875141 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.894094 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.896270 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.898581 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.899003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.899016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.899135 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.908694 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.909361 4810 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.909657 4810 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.910788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.910824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.910836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.910852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.910861 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:57Z","lastTransitionTime":"2026-02-19T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.923718 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.931302 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.935532 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.935564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.935572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.935588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.935597 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:57Z","lastTransitionTime":"2026-02-19T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.939928 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.955952 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.960189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.960249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.960265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.960286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.960300 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:57Z","lastTransitionTime":"2026-02-19T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.964947 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.973772 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.978834 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.978863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.978884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.978903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.978914 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:57Z","lastTransitionTime":"2026-02-19T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.979149 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: E0219 15:09:57.992388 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996120 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:57 crc kubenswrapper[4810]: I0219 15:09:57.996640 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:57Z","lastTransitionTime":"2026-02-19T15:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: E0219 15:09:58.008257 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: E0219 15:09:58.008443 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.010231 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.011450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc 
kubenswrapper[4810]: I0219 15:09:58.011473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.011482 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.011499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.011512 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.020645 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.034065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.048472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.060822 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.078970 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.096075 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.113526 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.113970 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.114011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.114027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.114050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.114067 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.133628 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.152486 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.165881 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.216613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.216652 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.216661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.216677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.216689 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.320480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.320543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.320564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.320594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.320615 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.375743 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:46:29.279403469 +0000 UTC Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.423400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.423469 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.423489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.423521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.423539 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.439208 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:58 crc kubenswrapper[4810]: E0219 15:09:58.439403 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.526944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.526998 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.527014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.527072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.527091 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.629553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.629617 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.629636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.629669 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.629692 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.682018 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450" exitCode=0 Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.682142 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.688086 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.714240 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.729599 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.734036 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.734126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.734154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.734192 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.734220 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.750844 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 
15:09:58.764983 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.777217 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.794612 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.810930 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.829244 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.837138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.837192 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.837206 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.837225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.837239 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.848096 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.860012 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.874708 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.890617 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.904922 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.916387 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:58Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.940680 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.940744 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.940757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.940778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:58 crc kubenswrapper[4810]: I0219 15:09:58.940791 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:58Z","lastTransitionTime":"2026-02-19T15:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.043956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.044013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.044023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.044046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.044058 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.147458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.147522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.147536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.147558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.147572 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.217486 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.217790 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 15:10:07.217746311 +0000 UTC m=+36.699776435 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.250714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.250789 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.250811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.250842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.250860 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.318776 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.318837 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.318875 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.318915 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319038 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319067 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319103 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:07.319081369 +0000 UTC m=+36.801111493 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319107 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319129 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319143 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319215 4810 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319237 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319258 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319198 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:07.319173342 +0000 UTC m=+36.801203566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319481 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:07.319448289 +0000 UTC m=+36.801478413 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.319500 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:07.31949365 +0000 UTC m=+36.801523774 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.354831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.354917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.354941 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.354971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.354995 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.376350 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:00:17.173644083 +0000 UTC Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.438391 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.438556 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.438695 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:09:59 crc kubenswrapper[4810]: E0219 15:09:59.438912 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.458187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.458245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.458256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.458278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.458762 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.561673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.561727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.561737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.561780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.561792 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.665169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.665227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.665240 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.665261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.665275 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.697375 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158" exitCode=0 Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.697460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.716761 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.739751 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.758550 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.768579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.768613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.768624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.768641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.768652 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.775157 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.793772 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.814375 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.829705 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.844674 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.859192 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.873807 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.874055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.874103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.874115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.874136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.874153 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.904741 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.918294 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.931384 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.945275 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:09:59Z is after 2025-08-24T17:21:41Z" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.977972 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.978480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.978498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.978527 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:09:59 crc kubenswrapper[4810]: I0219 15:09:59.978545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:09:59Z","lastTransitionTime":"2026-02-19T15:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.082164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.082257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.082282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.082679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.082913 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.185257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.185296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.185307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.185344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.185357 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.288876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.288973 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.288996 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.289031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.289053 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.376888 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:56:19.197994159 +0000 UTC Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.393769 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.393819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.393836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.393863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.393882 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.439571 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:00 crc kubenswrapper[4810]: E0219 15:10:00.439902 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.497118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.497190 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.497212 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.497243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.497261 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.600553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.600671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.600690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.600721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.600744 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.703258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.703297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.703308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.703351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.703364 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.706123 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerStarted","Data":"301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.716733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.717314 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.717357 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.717371 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.726248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.742822 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.764612 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.769806 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.783580 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.788755 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmw
lb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.807410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.807466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.807479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.807501 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.807521 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.821375 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.837993 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.853367 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.867467 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.884600 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.912035 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.912093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.912111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.912138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.912156 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:00Z","lastTransitionTime":"2026-02-19T15:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.918963 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.937913 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.954986 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.972006 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:00 crc kubenswrapper[4810]: I0219 15:10:00.992143 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:00Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.015306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.015387 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.015406 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.015430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.015449 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.029728 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.046868 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.065809 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.082683 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.101371 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.116999 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.118781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.118829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.118843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.118865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.118881 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.134671 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd
5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.152809 4810 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.154284 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.162:42874->38.102.83.162:6443: use of closed 
network connection" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.188019 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.202295 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.221876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.221914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.221926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.221943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.221957 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.227901 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.241905 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.259923 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.282253 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.325046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.325093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.325104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.325122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.325133 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.377346 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:24:38.218574451 +0000 UTC Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.428030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.428098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.428112 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.428136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.428153 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.438312 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.438391 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:01 crc kubenswrapper[4810]: E0219 15:10:01.438476 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:01 crc kubenswrapper[4810]: E0219 15:10:01.438676 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.464764 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875
ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.478834 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.498248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmw
lb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.514616 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.531534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.531601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.531620 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.531646 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.531662 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.532084 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d63
84c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.553263 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.587639 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.618463 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.634662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.634712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.634726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.634749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.634770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.643368 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.659738 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.680417 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.697442 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.710438 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.723382 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba" exitCode=0 Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.723478 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.726637 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.737117 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.737186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.737201 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.737224 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.737238 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.750462 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.773855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.794986 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.816190 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.837272 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.841113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.841144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.841154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc 
kubenswrapper[4810]: I0219 15:10:01.841174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.841188 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.850677 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.866453 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.880084 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.894211 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.906388 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.930756 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.945125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.945177 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.945188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.945209 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.945222 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:01Z","lastTransitionTime":"2026-02-19T15:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.950619 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.964531 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:01 crc kubenswrapper[4810]: I0219 15:10:01.981018 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:01Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.048010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.048057 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.048068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.048085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.048098 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.151790 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.151863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.151881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.151907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.151927 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.254961 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.255022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.255038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.255065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.255083 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.358407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.358477 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.358492 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.358515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.358531 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.378404 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:20:48.312009102 +0000 UTC Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.438728 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:02 crc kubenswrapper[4810]: E0219 15:10:02.438886 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.462287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.462341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.462352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.462370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.462383 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.565072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.565154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.565178 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.565229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.565256 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.668667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.668739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.668762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.668798 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.668821 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.736600 4810 generic.go:334] "Generic (PLEG): container finished" podID="247e948b-3c17-4675-bd1c-f894b02d2817" containerID="075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e" exitCode=0 Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.738526 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerDied","Data":"075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.764552 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.772590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.772638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.772650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.772669 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: 
I0219 15:10:02.772687 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.797013 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.814623 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.835389 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.851449 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.869380 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.876622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.876691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.876718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.876755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.876785 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.889705 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.905099 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.917954 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.935885 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063ab
c9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.953743 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.962993 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.978602 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.980157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.980231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.980247 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.980269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.980308 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:02Z","lastTransitionTime":"2026-02-19T15:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:02 crc kubenswrapper[4810]: I0219 15:10:02.994281 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T15:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.083923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.083980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.083988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.084005 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.084013 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.187200 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.187256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.187275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.187305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.187359 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.290173 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.290225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.290241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.290261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.290277 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.378832 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:23:56.409654138 +0000 UTC Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.392518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.392554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.392565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.392582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.392594 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.445552 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:03 crc kubenswrapper[4810]: E0219 15:10:03.445679 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.445553 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:03 crc kubenswrapper[4810]: E0219 15:10:03.445810 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.495610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.495656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.495673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.495699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.495715 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.599061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.599109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.599128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.599153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.599170 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.702140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.702194 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.702212 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.702235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.702249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.747560 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" event={"ID":"247e948b-3c17-4675-bd1c-f894b02d2817","Type":"ContainerStarted","Data":"7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.778866 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.798988 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 
15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.805606 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.805670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.805690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.805718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.805739 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.819057 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.844375 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.865530 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.890886 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908923 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:03Z","lastTransitionTime":"2026-02-19T15:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.908990 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z 
is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.929382 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.948735 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.966188 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.985214 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:03 crc kubenswrapper[4810]: I0219 15:10:03.999490 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:03Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.011592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.011645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.011662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.011686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.011703 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.017398 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.028378 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.114736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.114780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.114793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.114811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.114823 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.217171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.217209 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.217219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.217238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.217250 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.320217 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.320303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.320342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.320368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.320394 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.379971 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:58:51.861305569 +0000 UTC Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.423315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.423376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.423392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.423425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.423440 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.439247 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:04 crc kubenswrapper[4810]: E0219 15:10:04.439402 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.531741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.531924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.532105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.532950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.533026 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.636493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.636576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.636600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.636635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.636659 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.739297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.739418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.739489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.739513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.739527 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.753909 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/0.log" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.757634 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7" exitCode=1 Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.757683 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.758828 4810 scope.go:117] "RemoveContainer" containerID="2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.787653 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.811532 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.828452 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.842773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.842860 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.842882 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.843401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.843700 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.859698 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:04Z\\\",\\\"message\\\":\\\" 15:10:04.039820 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040242 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040279 6046 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:04.040252 6046 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040971 6046 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 15:10:04.040995 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 15:10:04.041001 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 15:10:04.041012 6046 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 15:10:04.041033 6046 factory.go:656] Stopping watch factory\\\\nI0219 15:10:04.041045 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0219 15:10:04.041064 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 15:10:04.041070 6046 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 15:10:04.041076 6046 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.875136 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.887932 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.906247 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.925004 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.945628 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.947014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.947048 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.947061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.947080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.947093 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:04Z","lastTransitionTime":"2026-02-19T15:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.959881 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.974620 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:04 crc kubenswrapper[4810]: I0219 15:10:04.989812 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:04Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.004349 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.018300 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.050076 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.050763 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.050964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.051190 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.051453 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.155170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.155236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.155255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.155287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.155306 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.257899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.257976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.257995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.258026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.258048 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.361944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.362037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.362063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.362095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.362119 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.381399 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:08:41.853977983 +0000 UTC Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.438546 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.438636 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:05 crc kubenswrapper[4810]: E0219 15:10:05.438745 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:05 crc kubenswrapper[4810]: E0219 15:10:05.438897 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.465053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.465114 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.465137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.465170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.465192 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.567996 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.568038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.568050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.568070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.568086 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.670806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.670845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.670857 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.670875 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.670892 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.763303 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/0.log" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.766473 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.767043 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.778040 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.778088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.778107 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.778127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.778139 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.789943 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.822896 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.837544 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.849699 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.868680 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:04Z\\\",\\\"message\\\":\\\" 15:10:04.039820 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040242 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040279 6046 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:04.040252 6046 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040971 6046 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 15:10:04.040995 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 15:10:04.041001 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 15:10:04.041012 6046 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 15:10:04.041033 6046 factory.go:656] Stopping watch factory\\\\nI0219 15:10:04.041045 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0219 15:10:04.041064 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 15:10:04.041070 6046 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 15:10:04.041076 6046 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.879856 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.893556 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.909227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.909286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.909297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.909317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.909344 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:05Z","lastTransitionTime":"2026-02-19T15:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.911927 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.926929 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.941201 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.961556 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.975134 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:05 crc kubenswrapper[4810]: I0219 15:10:05.989003 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:05Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.005775 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.011730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.011777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.011788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.011806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.011818 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.114836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.114894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.114905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.114923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.114935 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.182257 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t"] Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.182830 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.186932 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.187166 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.201080 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.209990 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06adf9a1-ec31-4acc-9864-41549913d3f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.210046 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.210093 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6qr\" (UniqueName: \"kubernetes.io/projected/06adf9a1-ec31-4acc-9864-41549913d3f4-kube-api-access-nh6qr\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 
15:10:06.210372 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.214447 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.217879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.217905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.217915 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.217933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.217948 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.237135 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.258529 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.273752 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.284481 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.299430 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.311422 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.311499 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06adf9a1-ec31-4acc-9864-41549913d3f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.311544 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.311565 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6qr\" (UniqueName: \"kubernetes.io/projected/06adf9a1-ec31-4acc-9864-41549913d3f4-kube-api-access-nh6qr\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.312284 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.312913 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06adf9a1-ec31-4acc-9864-41549913d3f4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.315574 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.324960 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06adf9a1-ec31-4acc-9864-41549913d3f4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.325165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.325189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.325201 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.325223 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.325239 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.332691 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.344077 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6qr\" (UniqueName: \"kubernetes.io/projected/06adf9a1-ec31-4acc-9864-41549913d3f4-kube-api-access-nh6qr\") pod \"ovnkube-control-plane-749d76644c-l2g4t\" (UID: \"06adf9a1-ec31-4acc-9864-41549913d3f4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.349189 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.360395 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.379203 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:04Z\\\",\\\"message\\\":\\\" 15:10:04.039820 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040242 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040279 6046 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:04.040252 6046 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040971 6046 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 15:10:04.040995 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 15:10:04.041001 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 15:10:04.041012 6046 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 15:10:04.041033 6046 factory.go:656] Stopping watch factory\\\\nI0219 15:10:04.041045 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0219 15:10:04.041064 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 15:10:04.041070 6046 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 15:10:04.041076 6046 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.381636 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:53:20.096161678 +0000 UTC Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.396216 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.411486 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.423939 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.427922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.427946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.427956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.427974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.427984 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.438396 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:06 crc kubenswrapper[4810]: E0219 15:10:06.438509 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.501725 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" Feb 19 15:10:06 crc kubenswrapper[4810]: W0219 15:10:06.518693 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06adf9a1_ec31_4acc_9864_41549913d3f4.slice/crio-c579edde8952a080f589ba913be0a30a6a895f762bee2bb74d70e89b7bea69cf WatchSource:0}: Error finding container c579edde8952a080f589ba913be0a30a6a895f762bee2bb74d70e89b7bea69cf: Status 404 returned error can't find the container with id c579edde8952a080f589ba913be0a30a6a895f762bee2bb74d70e89b7bea69cf Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.539127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.539193 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.539218 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.539251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.539273 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.643418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.643465 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.643476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.643497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.643509 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.746955 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.747017 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.747036 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.747065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.747085 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.771617 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/1.log" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.772232 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/0.log" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.775851 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4" exitCode=1 Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.775910 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.776013 4810 scope.go:117] "RemoveContainer" containerID="2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.776777 4810 scope.go:117] "RemoveContainer" containerID="86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4" Feb 19 15:10:06 crc kubenswrapper[4810]: E0219 15:10:06.777027 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.778275 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" event={"ID":"06adf9a1-ec31-4acc-9864-41549913d3f4","Type":"ContainerStarted","Data":"c579edde8952a080f589ba913be0a30a6a895f762bee2bb74d70e89b7bea69cf"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.793858 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.812560 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.833412 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.850711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.850748 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.850768 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.850796 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.850816 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.852297 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.870401 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.901567 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:04Z\\\",\\\"message\\\":\\\" 15:10:04.039820 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040242 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040279 6046 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:04.040252 6046 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040971 6046 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 15:10:04.040995 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 15:10:04.041001 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 15:10:04.041012 6046 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 15:10:04.041033 6046 factory.go:656] Stopping watch factory\\\\nI0219 15:10:04.041045 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0219 15:10:04.041064 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 15:10:04.041070 6046 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 15:10:04.041076 6046 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: 
I0219 15:10:06.919290 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.936874 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953286 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:06Z","lastTransitionTime":"2026-02-19T15:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.953470 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.970340 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:06 crc kubenswrapper[4810]: I0219 15:10:06.984489 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.000672 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:06Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.020921 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.049081 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.056660 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.056732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.056752 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.056781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.056807 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.075886 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.160298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.160382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.160399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.160424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.160445 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.220135 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.220401 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.220356824 +0000 UTC m=+52.702386978 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.264016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.264082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.264099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.264122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: 
I0219 15:10:07.264135 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.320846 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.320927 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.320970 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.321005 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321086 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321151 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321174 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321187 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.321158449 +0000 UTC m=+52.803188613 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321198 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321235 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321375 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321435 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321455 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321257 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.321239691 +0000 UTC m=+52.803269845 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321567 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.321540519 +0000 UTC m=+52.803570643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.321583 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.32157641 +0000 UTC m=+52.803606534 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.367847 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.367894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.367906 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.367925 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.367940 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.382054 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:29:22.442629155 +0000 UTC Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.439086 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.439201 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.439462 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.439644 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.471093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.471151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.471169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.471196 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.471215 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.579143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.579242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.579269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.579306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.579379 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.684505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.684589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.684610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.684639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.684661 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.692812 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2x9v9"] Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.694154 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.694362 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.715703 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.726273 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7x5\" (UniqueName: \"kubernetes.io/projected/b72d3f7a-e418-4a21-af73-6a43ce3358c1-kube-api-access-7z7x5\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.726406 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.741619 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.763154 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc 
kubenswrapper[4810]: I0219 15:10:07.787844 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.788241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.788283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.788300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.788359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.788392 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.791037 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/1.log" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.799004 4810 scope.go:117] "RemoveContainer" containerID="86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.799239 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.801388 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" event={"ID":"06adf9a1-ec31-4acc-9864-41549913d3f4","Type":"ContainerStarted","Data":"82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.801434 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" event={"ID":"06adf9a1-ec31-4acc-9864-41549913d3f4","Type":"ContainerStarted","Data":"618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.807398 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.827226 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.827579 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.827774 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7x5\" (UniqueName: \"kubernetes.io/projected/b72d3f7a-e418-4a21-af73-6a43ce3358c1-kube-api-access-7z7x5\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.828082 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: E0219 15:10:07.828195 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:08.328162368 +0000 UTC m=+37.810192502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.848046 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.858964 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7x5\" (UniqueName: \"kubernetes.io/projected/b72d3f7a-e418-4a21-af73-6a43ce3358c1-kube-api-access-7z7x5\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.867227 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.884689 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.891497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.891555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.891577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.891607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.891629 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.921589 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f04be3b9d3a27ffadd2a55f3b4cea853da683af5f844ec4c3ad6cbbc548add7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:04Z\\\",\\\"message\\\":\\\" 15:10:04.039820 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040242 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040279 6046 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:04.040252 6046 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:04.040971 6046 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 15:10:04.040995 6046 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 15:10:04.041001 6046 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 15:10:04.041012 6046 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 15:10:04.041033 6046 factory.go:656] Stopping watch factory\\\\nI0219 15:10:04.041045 6046 ovnkube.go:599] Stopped ovnkube\\\\nI0219 15:10:04.041064 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 15:10:04.041070 6046 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 15:10:04.041076 6046 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-
dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: 
I0219 15:10:07.942437 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.962267 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.983086 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:07Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.996226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.996318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.996402 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.996444 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:07 crc kubenswrapper[4810]: I0219 15:10:07.996468 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:07Z","lastTransitionTime":"2026-02-19T15:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.003972 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.023592 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.045872 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.082300 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.099825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.099896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.099913 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.099935 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.099948 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.108590 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.144638 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.165441 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.181988 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.204106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.204181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.204197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.204225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.204240 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.209018 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.224432 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc 
kubenswrapper[4810]: I0219 15:10:08.237366 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.249796 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.263133 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.275959 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.293551 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:0
9:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4c
a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.309215 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.309292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.309316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: 
I0219 15:10:08.309370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.309425 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.316006 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.332869 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.333167 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.333312 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:09.333279172 +0000 UTC m=+38.815309516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.334878 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.353026 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.382179 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.382411 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:06:43.854130182 +0000 UTC Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.383659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.383709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.383726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.383750 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: 
I0219 15:10:08.383763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.403459 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.408930 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.408963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.408975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.409001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.409016 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.427941 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.432422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.432455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.432467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.432487 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.432499 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.439132 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.439315 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.455268 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.461037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.461140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.461163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.461195 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.461220 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.484022 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.489471 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.489545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.489570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.489603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.489630 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.508593 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:08Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:08 crc kubenswrapper[4810]: E0219 15:10:08.508963 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.511522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.511580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.511596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.511622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.511638 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.615173 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.615246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.615264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.615297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.615320 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.719180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.719285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.719305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.719414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.719448 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.822683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.822745 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.822770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.822798 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.822820 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.926522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.926607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.926630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.926660 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:08 crc kubenswrapper[4810]: I0219 15:10:08.926685 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:08Z","lastTransitionTime":"2026-02-19T15:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.029789 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.029870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.029889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.029916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.029935 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.133453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.133510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.133533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.133562 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.133585 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.237010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.237051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.237066 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.237087 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.237100 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.340650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.340712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.340736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.340757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.340771 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.343353 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:09 crc kubenswrapper[4810]: E0219 15:10:09.343474 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:09 crc kubenswrapper[4810]: E0219 15:10:09.343561 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:11.343538949 +0000 UTC m=+40.825569083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.382880 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:05:28.219219383 +0000 UTC Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.438953 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.439008 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.439046 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:09 crc kubenswrapper[4810]: E0219 15:10:09.439219 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:09 crc kubenswrapper[4810]: E0219 15:10:09.439416 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:09 crc kubenswrapper[4810]: E0219 15:10:09.439541 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.444120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.444164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.444180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.444205 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.444223 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.550396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.550480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.550498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.550530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.550549 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.654079 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.654158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.654177 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.654210 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.654229 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.757544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.757603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.757621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.757648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.757671 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.861712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.861772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.861795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.861826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.861848 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.964944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.964989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.964999 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.965017 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:09 crc kubenswrapper[4810]: I0219 15:10:09.965027 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:09Z","lastTransitionTime":"2026-02-19T15:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.068439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.068897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.069046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.069196 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.069402 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.172691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.172751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.172771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.172799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.172830 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.276202 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.276269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.276288 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.276318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.276363 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.380378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.380458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.380479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.380507 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.380526 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.383436 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:09:28.197529938 +0000 UTC Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.438257 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:10 crc kubenswrapper[4810]: E0219 15:10:10.438453 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.439269 4810 scope.go:117] "RemoveContainer" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.483545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.483620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.483646 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.483679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.483703 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.588140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.588232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.588254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.588290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.588311 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.691732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.692004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.692018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.692039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.692052 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.795252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.795304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.795321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.795378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.795408 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.816316 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.819359 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e"} Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.898281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.898364 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.898382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.898410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:10 crc kubenswrapper[4810]: I0219 15:10:10.898427 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:10Z","lastTransitionTime":"2026-02-19T15:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.001784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.001857 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.001880 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.001912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.001934 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.106418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.106486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.106511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.106542 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.106560 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.210187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.210262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.210281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.210309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.210353 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.314308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.314404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.314422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.314450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.314474 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.368647 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:11 crc kubenswrapper[4810]: E0219 15:10:11.368882 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:11 crc kubenswrapper[4810]: E0219 15:10:11.369276 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:15.369247972 +0000 UTC m=+44.851278116 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.384563 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:21:49.134226845 +0000 UTC Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.417066 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.417150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.417170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.417197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.417218 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.438573 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.438614 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.438684 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:11 crc kubenswrapper[4810]: E0219 15:10:11.439543 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:11 crc kubenswrapper[4810]: E0219 15:10:11.439634 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:11 crc kubenswrapper[4810]: E0219 15:10:11.439316 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.468251 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.488635 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.508299 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.520970 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.521019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.521038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.521063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.521086 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.530661 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.550964 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.578981 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.595322 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc 
kubenswrapper[4810]: I0219 15:10:11.612628 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.624643 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.624716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.624740 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.624770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.624792 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.633300 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.650875 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.672954 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254
228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.697430 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.713793 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.727150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.727186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.727197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.727216 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.727228 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.731404 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.744703 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.767907 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.829780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.829845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.829863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.829894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.829913 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.842229 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.854703 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.876763 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.892248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.911594 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.928277 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.936103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.936160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.936179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.936214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.936228 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:11Z","lastTransitionTime":"2026-02-19T15:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.954574 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.973565 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:11 crc kubenswrapper[4810]: I0219 15:10:11.991129 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:11Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.009931 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.033107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.039715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.039771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.039788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.039815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.039835 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.054286 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc 
kubenswrapper[4810]: I0219 15:10:12.072752 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.088609 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.110070 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.126843 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:12Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.145354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.145814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.145889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.146001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.146071 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.250095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.250154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.250163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.250188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.250200 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.353285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.353376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.353395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.353424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.353444 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.385537 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:38:58.357487677 +0000 UTC Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.439056 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:12 crc kubenswrapper[4810]: E0219 15:10:12.439590 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.457304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.457409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.457422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.457444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.457459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.561513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.561574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.561589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.561613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.561632 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.666188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.666267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.666289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.666358 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.666384 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.770295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.770402 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.770428 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.770459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.770480 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.874046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.874116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.874226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.874283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.874310 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.977430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.977504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.977524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.977554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:12 crc kubenswrapper[4810]: I0219 15:10:12.977573 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:12Z","lastTransitionTime":"2026-02-19T15:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.080886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.080957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.080981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.081009 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.081031 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.185236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.185363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.185390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.185424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.185447 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.290995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.291064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.291082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.291110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.291130 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.386694 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:08:54.440407628 +0000 UTC Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.395040 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.395111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.395126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.395148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.395163 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.439668 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.439830 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:13 crc kubenswrapper[4810]: E0219 15:10:13.439870 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.439882 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:13 crc kubenswrapper[4810]: E0219 15:10:13.440176 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:13 crc kubenswrapper[4810]: E0219 15:10:13.440426 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.497607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.497668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.497683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.497710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.497726 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.600475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.600534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.600547 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.600569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.600581 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.703126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.703190 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.703213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.703240 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.703284 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.807305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.807437 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.807465 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.807505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.807532 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.910837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.910977 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.911018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.911061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:13 crc kubenswrapper[4810]: I0219 15:10:13.911088 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:13Z","lastTransitionTime":"2026-02-19T15:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.014924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.015367 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.015576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.015845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.016003 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.119949 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.120019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.120042 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.120075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.120097 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.223011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.223051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.223063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.223081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.223093 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.325853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.325896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.325908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.325927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.325938 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.388240 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:01:39.702338114 +0000 UTC Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.429706 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.429758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.429769 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.429789 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.429830 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.438573 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:14 crc kubenswrapper[4810]: E0219 15:10:14.438732 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.532133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.532183 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.532195 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.532221 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.532239 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.635969 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.636062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.636080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.636108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.636126 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.739451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.739527 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.739545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.739580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.739601 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.843093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.843160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.843178 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.843206 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.843221 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.947614 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.947697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.947715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.947751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:14 crc kubenswrapper[4810]: I0219 15:10:14.947770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:14Z","lastTransitionTime":"2026-02-19T15:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.050952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.051028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.051044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.051069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.051088 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.085189 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.154561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.154714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.154739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.154771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.154791 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.257831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.257887 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.257903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.257923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.257937 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.361924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.361997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.362011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.362033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.362048 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.389512 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:23:44.427192891 +0000 UTC Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.427462 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:15 crc kubenswrapper[4810]: E0219 15:10:15.427764 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:15 crc kubenswrapper[4810]: E0219 15:10:15.427913 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:23.427877948 +0000 UTC m=+52.909908112 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.438532 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.438662 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.438655 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:15 crc kubenswrapper[4810]: E0219 15:10:15.438882 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:15 crc kubenswrapper[4810]: E0219 15:10:15.439073 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:15 crc kubenswrapper[4810]: E0219 15:10:15.439643 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.465829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.465943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.466016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.466059 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.466101 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.569307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.569385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.569400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.569423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.569440 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.672588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.672656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.672867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.672901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.672924 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.775796 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.775863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.775883 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.775910 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.775927 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.878780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.878836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.878850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.878877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.878897 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.981869 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.981923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.981940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.981966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:15 crc kubenswrapper[4810]: I0219 15:10:15.981983 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:15Z","lastTransitionTime":"2026-02-19T15:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.085508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.085552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.085585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.085607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.085623 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.188585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.188639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.188655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.188677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.188691 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.291583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.291633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.291646 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.291666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.291679 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.390315 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:55:05.858331616 +0000 UTC Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.394499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.394581 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.394609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.394645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.394674 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.439024 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:16 crc kubenswrapper[4810]: E0219 15:10:16.439257 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.497813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.497875 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.497890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.497911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.497927 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.600253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.600376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.600404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.600439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.600471 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.703549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.703587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.703633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.703651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.703664 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.806754 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.806809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.806820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.806844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.806858 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.910296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.910418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.910427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.910451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:16 crc kubenswrapper[4810]: I0219 15:10:16.910462 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:16Z","lastTransitionTime":"2026-02-19T15:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.013639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.013725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.013749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.013787 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.013812 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.116875 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.116924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.116935 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.116953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.116963 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.219518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.219570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.219579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.219596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.219608 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.322294 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.322351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.322360 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.322378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.322389 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.391014 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:58:27.420958112 +0000 UTC Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.425513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.425918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.426018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.426121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.426205 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.438960 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.439025 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.438960 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:17 crc kubenswrapper[4810]: E0219 15:10:17.439187 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:17 crc kubenswrapper[4810]: E0219 15:10:17.439276 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:17 crc kubenswrapper[4810]: E0219 15:10:17.439385 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.529753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.529829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.529843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.529867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.529885 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.632721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.632788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.632809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.632842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.632868 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.736835 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.736918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.736940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.736974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.736997 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.840585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.840636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.840655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.840679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.840695 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.944481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.944570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.944595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.944630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:17 crc kubenswrapper[4810]: I0219 15:10:17.944654 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:17Z","lastTransitionTime":"2026-02-19T15:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.048254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.048314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.048350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.048378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.048393 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.151530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.151584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.151599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.151622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.151635 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.254184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.254234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.254248 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.254267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.254279 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.358359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.358421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.358439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.358468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.358487 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.368427 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.380263 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.391794 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.392871 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:43:22.974988894 +0000 UTC Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.411538 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.435582 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.438544 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:18 crc kubenswrapper[4810]: E0219 15:10:18.438759 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.455533 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.467734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.467807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.467840 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.467870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.467889 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.475126 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.492971 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.511443 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.531767 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.553231 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.570842 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.571723 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.571772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.571788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.571815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.571835 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.600668 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.621312 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.643766 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.665200 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.675227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.675287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.675305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.675360 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.675380 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.685561 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.707150 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.778447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.778874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.778973 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.779062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.779144 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.882688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.882753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.882772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.882800 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.882820 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.891225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.891296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.891320 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.891378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.891395 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: E0219 15:10:18.912367 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.918411 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.918763 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.918959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.919115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.919258 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: E0219 15:10:18.941712 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.946490 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.946537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.946550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.946569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.946582 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: E0219 15:10:18.966474 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.971993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.972039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.972051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.972070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.972083 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:18 crc kubenswrapper[4810]: E0219 15:10:18.991306 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:18Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.996915 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.996971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.996990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.997016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:18 crc kubenswrapper[4810]: I0219 15:10:18.997039 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:18Z","lastTransitionTime":"2026-02-19T15:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: E0219 15:10:19.018503 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:19Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:19 crc kubenswrapper[4810]: E0219 15:10:19.018727 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.020838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.020878 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.020890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.020909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.020922 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.124062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.124131 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.124147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.124175 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.124191 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.227466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.227502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.227511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.227526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.227536 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.330103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.330148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.330161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.330181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.330193 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.393665 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:30:11.192771959 +0000 UTC Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.433862 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.434260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.434383 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.434503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.434606 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.439540 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.439655 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.439548 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:19 crc kubenswrapper[4810]: E0219 15:10:19.439773 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:19 crc kubenswrapper[4810]: E0219 15:10:19.439913 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:19 crc kubenswrapper[4810]: E0219 15:10:19.440041 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.538511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.538583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.538602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.538628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.538649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.641692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.641772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.641799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.641832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.641854 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.744374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.744716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.744798 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.744879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.744947 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.848193 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.848615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.848832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.849002 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.849146 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.951629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.951681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.951700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.951725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:19 crc kubenswrapper[4810]: I0219 15:10:19.951744 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:19Z","lastTransitionTime":"2026-02-19T15:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.054919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.054953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.054962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.055024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.055034 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.158435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.158510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.158536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.158570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.158588 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.261887 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.261946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.261965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.261993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.262013 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.364515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.364566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.364582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.364607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.364623 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.394299 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:22:37.417621204 +0000 UTC Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.438619 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:20 crc kubenswrapper[4810]: E0219 15:10:20.438788 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.467630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.467698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.467709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.467731 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.467744 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.571104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.571174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.571197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.571228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.571251 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.674514 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.674613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.674632 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.674733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.674751 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.778007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.778064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.778083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.778109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.778129 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.880783 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.880857 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.880871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.880892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.880904 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.983538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.983578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.983591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.983610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:20 crc kubenswrapper[4810]: I0219 15:10:20.983624 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:20Z","lastTransitionTime":"2026-02-19T15:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.086731 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.086872 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.086898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.086928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.086950 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.190362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.190446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.190483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.190519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.190544 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.294043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.294103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.294120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.294144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.294165 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.395276 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:49:36.11668155 +0000 UTC Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.397307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.397378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.397391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.397411 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.397429 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.439188 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.439355 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.439451 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:21 crc kubenswrapper[4810]: E0219 15:10:21.439483 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:21 crc kubenswrapper[4810]: E0219 15:10:21.439564 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:21 crc kubenswrapper[4810]: E0219 15:10:21.439632 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.456159 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.475890 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.494995 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.503452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.503543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.503567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.503595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.503616 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.507251 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc 
kubenswrapper[4810]: I0219 15:10:21.519855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.533803 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.554065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.573681 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee
b662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.596321 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.606455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.606528 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.606544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.606592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.606610 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.617996 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.632114 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.654361 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.672747 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.688417 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.701951 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.709186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.709244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.709262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.709291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.709310 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.717529 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.734602 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:21Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.811467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.811515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.811526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.811546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.811559 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.914450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.914519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.914542 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.914566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:21 crc kubenswrapper[4810]: I0219 15:10:21.914585 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:21Z","lastTransitionTime":"2026-02-19T15:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.017309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.017363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.017374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.017392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.017409 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.120614 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.120677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.120692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.120714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.120729 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.223199 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.223563 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.223651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.223745 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.223814 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.327712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.327784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.327802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.327826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.327845 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.395501 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:35:08.287432702 +0000 UTC Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.430663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.430716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.430734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.430764 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.430783 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.438595 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:22 crc kubenswrapper[4810]: E0219 15:10:22.438784 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.440228 4810 scope.go:117] "RemoveContainer" containerID="86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.532868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.532924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.532941 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.532968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.532985 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.636634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.636716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.636741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.636779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.636806 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.740823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.740879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.740889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.740912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.740926 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.845022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.845082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.845101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.845135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.845158 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.869377 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/1.log" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.873882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.874707 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.890549 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.913086 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.929430 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.947853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.947900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.947912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:22 crc 
kubenswrapper[4810]: I0219 15:10:22.947934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.947948 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:22Z","lastTransitionTime":"2026-02-19T15:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.951933 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07
5e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.970819 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:22 crc kubenswrapper[4810]: I0219 15:10:22.987305 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.004427 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:22Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.031785 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.048199 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.050101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.050128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.050139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.050157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.050168 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.064590 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd
5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.087482 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.117466 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.135416 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.153058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.153118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.153136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.153158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.153174 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.159773 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.173544 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.186896 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.201241 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.224926 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.225222 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:10:55.22517756 +0000 UTC m=+84.707207684 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.256542 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.256602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.256620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.256651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.256670 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.326557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.326635 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.326687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.326721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326773 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326853 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326879 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326895 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326861 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:55.326835226 +0000 UTC m=+84.808865360 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326967 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:55.326935329 +0000 UTC m=+84.808965463 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.326960 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.327020 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.327060 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.327078 4810 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.327121 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:55.327094803 +0000 UTC m=+84.809124927 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.327153 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:55.327131424 +0000 UTC m=+84.809161548 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.359414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.359472 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.359484 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.359508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.359523 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.396793 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:33:00.886298481 +0000 UTC Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.428299 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.428515 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.428647 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:10:39.428615756 +0000 UTC m=+68.910645890 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.439014 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.439037 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.439188 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.439359 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.439588 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.439742 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.462725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.462791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.462806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.462828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.462845 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.565737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.565791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.565802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.565823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.565837 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.669217 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.669259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.669269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.669286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.669299 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.771951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.772015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.772029 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.772052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.772067 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.874710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.874766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.874776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.874797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.874811 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.878982 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/2.log" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.879749 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/1.log" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.882682 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" exitCode=1 Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.882726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.882796 4810 scope.go:117] "RemoveContainer" containerID="86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.884651 4810 scope.go:117] "RemoveContainer" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" Feb 19 15:10:23 crc kubenswrapper[4810]: E0219 15:10:23.887655 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.905231 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.919737 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.930806 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.943975 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.957253 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.972124 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:0
9:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4c
a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.979082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.979140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.979152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:23 crc kubenswrapper[4810]: 
I0219 15:10:23.979174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.979186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:23Z","lastTransitionTime":"2026-02-19T15:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:23 crc kubenswrapper[4810]: I0219 15:10:23.988638 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:23Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.007107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.020431 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.047830 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86c2bd7e735fa6950f8384e98fc9f9b2c030892c44b19ca0f13fd0e4bad302e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"message\\\":\\\"ns:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:06.221628 6231 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 15:10:06.223394 6231 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 15:10:06.223450 6231 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 
services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728
f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.065940 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.081084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.081122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.081133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 
15:10:24.081151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.081164 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.082526 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.104811 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.125315 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.147855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.171588 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.184799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.184866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.184882 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.184915 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.184934 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.187081 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc 
kubenswrapper[4810]: I0219 15:10:24.288197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.288249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.288260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.288282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.288298 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.392203 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.392256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.392267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.392289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.392304 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.397415 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:40:48.62759197 +0000 UTC Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.439272 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:24 crc kubenswrapper[4810]: E0219 15:10:24.439906 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.496055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.496100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.496111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.496131 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.496145 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.599494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.599540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.599549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.599570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.599582 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.702762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.702810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.702818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.702836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.702846 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.806299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.806393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.806408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.806430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.806442 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.894085 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/2.log" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.899109 4810 scope.go:117] "RemoveContainer" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" Feb 19 15:10:24 crc kubenswrapper[4810]: E0219 15:10:24.899293 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.909995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.910063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.910084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.910113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.910130 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:24Z","lastTransitionTime":"2026-02-19T15:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.931268 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.951731 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.974844 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:24 crc kubenswrapper[4810]: I0219 15:10:24.997004 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:24Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.013943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.014031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.014052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.014082 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.014099 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.015005 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.034875 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.055313 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.075755 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.091739 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.101706 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
trypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.117258 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.117761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.117825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.117845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.118283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.118384 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.137657 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.153077 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.169581 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254
228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.183741 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.198443 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.211475 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.221006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.221050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.221061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc 
kubenswrapper[4810]: I0219 15:10:25.221081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.221095 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.224420 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.241274 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.257212 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.271185 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.286707 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.305385 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324252 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.324798 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd
5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.346303 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.364028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.375718 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.397850 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:41:47.072050736 +0000 UTC Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.403603 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.417842 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.426622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.426663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.426673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 
15:10:25.426691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.426705 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.430696 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.438359 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.438368 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:25 crc kubenswrapper[4810]: E0219 15:10:25.438533 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.438386 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:25 crc kubenswrapper[4810]: E0219 15:10:25.438629 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:25 crc kubenswrapper[4810]: E0219 15:10:25.438726 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.444183 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.454755 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc 
kubenswrapper[4810]: I0219 15:10:25.470247 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd65
0a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.482802 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.498650 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:25Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.530228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.530289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.530307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.530358 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.530377 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.633926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.633995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.634013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.634046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.634064 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.736811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.736876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.736894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.736922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.736943 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.840207 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.840352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.840373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.840400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.840421 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.943371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.943424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.943442 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.943506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:25 crc kubenswrapper[4810]: I0219 15:10:25.943526 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:25Z","lastTransitionTime":"2026-02-19T15:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.046445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.046500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.046515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.046534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.046548 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.149885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.149952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.149971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.149997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.150016 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.252219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.252253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.252262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.252279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.252288 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.356519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.356587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.356606 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.356626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.356639 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.398637 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:45:25.644992925 +0000 UTC Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.438901 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:26 crc kubenswrapper[4810]: E0219 15:10:26.439228 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.459639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.459679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.459689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.459706 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.459718 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.562620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.562678 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.562689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.562713 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.562724 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.666140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.666204 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.666225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.666254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.666274 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.770279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.770343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.770355 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.770372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.770381 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.873375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.873463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.873486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.873520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.873543 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.976471 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.976531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.976549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.976574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:26 crc kubenswrapper[4810]: I0219 15:10:26.976594 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:26Z","lastTransitionTime":"2026-02-19T15:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.080137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.080202 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.080228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.080261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.080286 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.183833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.183910 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.183928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.183953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.183966 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.287615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.287679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.287700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.287726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.287748 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.391051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.391115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.391135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.391166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.391186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.399211 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:11:01.269717572 +0000 UTC Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.439101 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.439132 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:27 crc kubenswrapper[4810]: E0219 15:10:27.439453 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.439151 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:27 crc kubenswrapper[4810]: E0219 15:10:27.439752 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:27 crc kubenswrapper[4810]: E0219 15:10:27.439979 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.494420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.494481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.494502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.494533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.494554 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.597814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.597893 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.597907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.597928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.597942 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.700394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.700449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.700462 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.700481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.700497 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.803697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.803771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.803793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.803820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.803839 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.907650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.907717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.907729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.907750 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:27 crc kubenswrapper[4810]: I0219 15:10:27.907763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:27Z","lastTransitionTime":"2026-02-19T15:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.011084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.011158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.011182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.011211 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.011228 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.114741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.114795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.114813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.114841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.114860 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.218275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.218351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.218371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.218393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.218408 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.321213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.321275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.321293 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.321321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.321381 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.399523 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:49:09.552290039 +0000 UTC Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.423883 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.423961 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.423984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.424011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.424028 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.439229 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:28 crc kubenswrapper[4810]: E0219 15:10:28.439448 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.527070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.527146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.527170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.527199 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.527220 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.630081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.630170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.630194 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.630225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.630245 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.734065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.734255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.734278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.734306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.734353 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.838279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.838372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.838400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.838432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.838452 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.941805 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.941882 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.941901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.941963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:28 crc kubenswrapper[4810]: I0219 15:10:28.941982 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:28Z","lastTransitionTime":"2026-02-19T15:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.045147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.045207 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.045225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.045252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.045270 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.148641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.148708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.148726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.148753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.148772 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.154839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.154930 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.154956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.154990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.155014 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.176835 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.184281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.184426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.184444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.184468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.184481 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.200850 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.207038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.207086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.207095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.207115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.207127 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.226278 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.233207 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.233282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.233303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.233359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.233382 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.247826 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.253470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.253595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.253656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.253692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.253743 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.269694 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:29Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.270159 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.272603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.272692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.272710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.272762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.272780 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.376725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.376813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.376853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.376878 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.376892 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.400481 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:14:23.235806766 +0000 UTC Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.438403 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.438525 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.438654 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.438415 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.438754 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:29 crc kubenswrapper[4810]: E0219 15:10:29.439029 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.480681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.480766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.480837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.480871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.480932 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.584302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.584408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.584426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.584452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.584468 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.687280 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.687397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.687423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.687452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.687469 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.790055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.790104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.790114 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.790132 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.790144 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.892544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.892641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.892667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.892697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.892719 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.996276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.996391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.996415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.996447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:29 crc kubenswrapper[4810]: I0219 15:10:29.996472 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:29Z","lastTransitionTime":"2026-02-19T15:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.100254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.100304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.100386 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.100421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.100439 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.204231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.204357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.204395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.204426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.204448 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.308632 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.308686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.308703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.308729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.308747 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.401516 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:06:02.13748365 +0000 UTC Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.412511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.412573 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.412595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.412624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.412642 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.439066 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:30 crc kubenswrapper[4810]: E0219 15:10:30.439236 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.516371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.516430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.516452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.516476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.516498 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.620373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.620418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.620428 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.620446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.620460 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.724261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.724357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.724374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.724398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.724414 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.829319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.829399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.829414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.829434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.829445 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.932917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.932965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.932984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.933011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:30 crc kubenswrapper[4810]: I0219 15:10:30.933028 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:30Z","lastTransitionTime":"2026-02-19T15:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.036477 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.036569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.036598 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.036634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.036656 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.139868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.139908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.139916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.139932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.139945 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.243777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.243832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.243851 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.243879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.243898 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.346286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.346319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.346373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.346397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.346411 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.402248 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:23:38.445231412 +0000 UTC Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.438865 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.438931 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.439011 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:31 crc kubenswrapper[4810]: E0219 15:10:31.439055 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:31 crc kubenswrapper[4810]: E0219 15:10:31.439237 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:31 crc kubenswrapper[4810]: E0219 15:10:31.439400 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.454412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.454509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.454541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.454582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.454622 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.464896 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.482794 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.501636 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.517151 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.531317 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.547435 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.560092 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.560284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.560408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.560510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.560598 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.565038 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.586193 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.610384 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:0
9:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.627156 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.647487 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.664145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.664207 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.664227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.664254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 
15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.664273 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.667354 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.682167 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254
228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.701021 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.716795 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.731965 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.748924 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:31Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.766922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.767005 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.767022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.767048 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.767066 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.869682 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.869764 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.869780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.869802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.869817 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.972461 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.973024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.973044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.973075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:31 crc kubenswrapper[4810]: I0219 15:10:31.973093 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:31Z","lastTransitionTime":"2026-02-19T15:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.076732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.076823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.076881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.076909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.076937 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.181222 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.181531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.181612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.181695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.181797 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.285715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.285800 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.285845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.285886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.285913 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.389475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.389577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.389616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.389637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.389654 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.403075 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:40:42.395395938 +0000 UTC Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.438491 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:32 crc kubenswrapper[4810]: E0219 15:10:32.438692 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.492622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.492674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.492689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.492710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.492724 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.595180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.595257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.595268 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.595292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.595305 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.706848 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.706966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.706988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.707023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.707046 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.810822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.810894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.810919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.810958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.810984 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.915266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.915368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.915389 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.915414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:32 crc kubenswrapper[4810]: I0219 15:10:32.915431 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:32Z","lastTransitionTime":"2026-02-19T15:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.018802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.018874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.018897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.018928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.018951 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.121983 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.122043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.122058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.122080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.122094 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.225803 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.225923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.225957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.225990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.226012 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.329859 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.329943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.330014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.330044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.330062 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.404011 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:29:35.543344479 +0000 UTC Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.433148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.433206 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.433219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.433241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.433255 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.438702 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.438793 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.438702 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:33 crc kubenswrapper[4810]: E0219 15:10:33.438952 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:33 crc kubenswrapper[4810]: E0219 15:10:33.439102 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:33 crc kubenswrapper[4810]: E0219 15:10:33.439379 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.536797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.536878 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.536907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.536946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.536976 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.640473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.640601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.640630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.640666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.640690 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.744169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.744226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.744235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.744258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.744270 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.848039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.848120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.848157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.848205 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.848232 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.951401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.951555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.951584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.951615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:33 crc kubenswrapper[4810]: I0219 15:10:33.951660 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:33Z","lastTransitionTime":"2026-02-19T15:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.055317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.055464 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.055502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.055539 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.055564 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.158665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.158709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.158719 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.158736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.158747 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.262272 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.262381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.262403 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.262436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.262459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.366142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.366213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.366231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.366253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.366266 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.404965 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:05:30.523716276 +0000 UTC Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.439172 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:34 crc kubenswrapper[4810]: E0219 15:10:34.439432 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.469827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.469879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.469896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.469920 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.469939 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.573420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.573470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.573482 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.573502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.573516 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.677189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.677274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.677298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.677371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.677394 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.863026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.863085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.863099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.863126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.863138 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.965652 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.965705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.965717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.965741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:34 crc kubenswrapper[4810]: I0219 15:10:34.965763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:34Z","lastTransitionTime":"2026-02-19T15:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.068867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.068924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.068934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.068951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.068963 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.172258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.172305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.172316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.172377 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.172390 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.275602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.275697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.275716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.275747 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.275768 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.379032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.379111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.379123 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.379146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.379160 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.405607 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:45:55.086092659 +0000 UTC Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.439289 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.439341 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:35 crc kubenswrapper[4810]: E0219 15:10:35.439594 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.439684 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:35 crc kubenswrapper[4810]: E0219 15:10:35.439845 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:35 crc kubenswrapper[4810]: E0219 15:10:35.439932 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.482050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.482154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.482173 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.482198 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.482217 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.585686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.585735 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.585743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.585760 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.585773 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.690676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.690742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.690753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.690773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.690785 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.794860 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.794949 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.794974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.795005 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.795027 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.897695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.897759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.897776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.897803 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.897820 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.999791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.999827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.999835 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.999852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:35 crc kubenswrapper[4810]: I0219 15:10:35.999861 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:35Z","lastTransitionTime":"2026-02-19T15:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.102504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.102540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.102548 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.102564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.102573 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.204857 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.204934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.204953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.204979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.205001 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.307884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.307928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.307939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.307957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.307968 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.406153 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:48:01.704585739 +0000 UTC Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.410845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.410881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.410891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.410908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.410918 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.438648 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:36 crc kubenswrapper[4810]: E0219 15:10:36.438813 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.514188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.514225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.514236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.514253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.514299 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.617634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.617687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.617699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.617717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.617729 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.720394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.720457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.720480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.720511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.720535 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.823248 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.823306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.823319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.823363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.823377 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.926378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.926449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.926480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.926510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:36 crc kubenswrapper[4810]: I0219 15:10:36.926533 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:36Z","lastTransitionTime":"2026-02-19T15:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.028936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.028991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.029011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.029038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.029056 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.131782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.131853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.131876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.131912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.131942 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.235214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.235261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.235274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.235293 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.235307 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.337858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.337932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.337957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.337993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.338021 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.406664 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:25:43.244626457 +0000 UTC Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.439206 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.439476 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.439529 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:37 crc kubenswrapper[4810]: E0219 15:10:37.439741 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:37 crc kubenswrapper[4810]: E0219 15:10:37.439893 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:37 crc kubenswrapper[4810]: E0219 15:10:37.440199 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.444007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.444145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.444166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.444193 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.444382 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.469600 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.547502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.547580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.547603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.547637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.547661 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.651187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.651246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.651257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.651281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.651294 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.753663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.753767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.753792 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.753827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.753855 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.856978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.857058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.857083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.857116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.857141 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.959757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.959809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.959822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.959842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:37 crc kubenswrapper[4810]: I0219 15:10:37.959858 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:37Z","lastTransitionTime":"2026-02-19T15:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.062452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.062508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.062525 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.062551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.062568 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.164787 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.164838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.164847 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.164864 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.164876 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.268206 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.268271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.268288 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.268318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.268372 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.371238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.371286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.371300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.371319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.371348 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.406868 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:25:20.238390565 +0000 UTC Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.439548 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:38 crc kubenswrapper[4810]: E0219 15:10:38.439769 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.474514 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.474574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.474592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.474618 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.474639 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.577772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.577838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.577849 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.577873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.577887 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.681436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.681502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.681530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.681620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.681651 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.784607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.784671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.784688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.784738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.784760 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.888154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.888302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.888568 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.888616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.888680 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.992362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.992429 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.992458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.992503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:38 crc kubenswrapper[4810]: I0219 15:10:38.992528 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:38Z","lastTransitionTime":"2026-02-19T15:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.095259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.095362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.095391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.095419 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.095437 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.198257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.198371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.198404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.198435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.198455 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.301715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.301759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.301771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.301791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.301809 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.404721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.404797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.404814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.404838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.404853 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.407978 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:06:42.182565952 +0000 UTC Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.438394 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.438452 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.438394 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.438632 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.438726 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.438799 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.439647 4810 scope.go:117] "RemoveContainer" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.439836 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.490679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.490757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.490781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.490815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.490836 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.509579 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.509380 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.510300 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.510412 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:11:11.510376053 +0000 UTC m=+100.992406177 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.514224 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.514299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.514357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.514404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.514429 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.537232 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.543585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.543654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.543677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.543705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.543732 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.566601 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.571852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.571922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.571944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.571978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.572004 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.593684 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.599479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.599567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.599592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.599626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.599652 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.620291 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:39Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:39 crc kubenswrapper[4810]: E0219 15:10:39.620476 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.622836 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.622879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.622892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.622914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.622927 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.725739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.725806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.725823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.725849 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.725872 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.829072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.829150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.829175 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.829209 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.829236 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.932048 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.932091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.932107 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.932126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:39 crc kubenswrapper[4810]: I0219 15:10:39.932142 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:39Z","lastTransitionTime":"2026-02-19T15:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.034890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.035222 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.035314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.035435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.035511 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.138894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.138947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.138967 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.138994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.139012 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.242903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.242965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.243002 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.243022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.243035 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.346826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.346871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.346881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.346899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.346910 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.408365 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:23:11.065543924 +0000 UTC Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.439060 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:40 crc kubenswrapper[4810]: E0219 15:10:40.439284 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.450757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.450844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.450870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.450904 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.450927 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.554052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.554129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.554152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.554184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.554207 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.658088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.658168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.658188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.658216 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.658234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.762033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.762118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.762146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.762179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.762201 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.864788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.864848 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.864870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.864898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.864918 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.967656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.967715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.967730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.967754 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:40 crc kubenswrapper[4810]: I0219 15:10:40.967765 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:40Z","lastTransitionTime":"2026-02-19T15:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.070251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.070355 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.070375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.070408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.070426 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.173756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.174128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.174150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.174171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.174188 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.276988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.277038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.277051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.277071 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.277081 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.379730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.379798 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.379810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.379832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.379845 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.409278 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:26:26.059493718 +0000 UTC Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.439017 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:41 crc kubenswrapper[4810]: E0219 15:10:41.439186 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.439554 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.439577 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:41 crc kubenswrapper[4810]: E0219 15:10:41.439757 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:41 crc kubenswrapper[4810]: E0219 15:10:41.439794 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.463521 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.479274 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.482571 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.482626 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.482636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.482698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.482712 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.491862 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.511452 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.526065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.539315 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.552867 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.568209 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585255 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687
cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.585970 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.601849 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.618239 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.629224 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc 
kubenswrapper[4810]: I0219 15:10:41.643425 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.656154 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.670645 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.682283 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.688822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.688873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.688886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.688907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.688921 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.694494 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.706210 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:41Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.791830 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.791885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.791900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.791920 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.791930 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.894964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.895021 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.895067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.895089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.895103 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.997850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.997921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.997940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.997966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:41 crc kubenswrapper[4810]: I0219 15:10:41.997983 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:41Z","lastTransitionTime":"2026-02-19T15:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.100770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.100829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.100844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.100873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.100890 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.202905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.202950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.202962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.202979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.202991 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.305885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.305965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.305989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.306020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.306039 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409401 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:10:05.177735383 +0000 UTC Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.409632 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.438997 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:42 crc kubenswrapper[4810]: E0219 15:10:42.439179 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.513634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.513701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.513715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.513738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.513753 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.617222 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.617285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.617310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.617348 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.617359 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.720388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.720438 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.720449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.720468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.720482 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.823817 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.823855 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.823873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.823891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.823902 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.926672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.926707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.926716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.926731 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.926740 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:42Z","lastTransitionTime":"2026-02-19T15:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.964602 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/0.log" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.964660 4810 generic.go:334] "Generic (PLEG): container finished" podID="2a45a199-beeb-4972-b796-15c958fe99d3" containerID="a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2" exitCode=1 Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.964691 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerDied","Data":"a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2"} Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.965120 4810 scope.go:117] "RemoveContainer" containerID="a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2" Feb 19 15:10:42 crc kubenswrapper[4810]: I0219 15:10:42.985431 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:42Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.001466 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:42Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.016745 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.030697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.030742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.030751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc 
kubenswrapper[4810]: I0219 15:10:43.030774 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.030786 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.031456 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.049754 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254
228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.068483 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.083360 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.105470 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.122834 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.134245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.134283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.134294 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.134311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.134338 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.148788 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.168109 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.184121 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.200629 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.215820 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc 
kubenswrapper[4810]: I0219 15:10:43.226992 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.237154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.237182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.237192 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 
15:10:43.237209 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.237220 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.245526 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.259786 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.279486 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.340489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.340561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.340580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.341059 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.341116 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.409630 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:58:31.114020175 +0000 UTC Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.439318 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.439454 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.439592 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:43 crc kubenswrapper[4810]: E0219 15:10:43.440063 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:43 crc kubenswrapper[4810]: E0219 15:10:43.439791 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:43 crc kubenswrapper[4810]: E0219 15:10:43.440502 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.444977 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.445020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.445039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.445065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.445086 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.549898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.550285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.550297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.550340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.550358 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.653705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.653762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.653780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.653809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.653829 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.757074 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.757138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.757156 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.757185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.757218 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.860768 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.860815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.860834 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.860859 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.860877 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.966022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.966081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.966099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.966131 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.966151 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:43Z","lastTransitionTime":"2026-02-19T15:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.971993 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/0.log" Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.972095 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerStarted","Data":"c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca"} Feb 19 15:10:43 crc kubenswrapper[4810]: I0219 15:10:43.988911 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:43Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.009155 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.024279 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc 
kubenswrapper[4810]: I0219 15:10:44.036500 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.052651 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687
cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.067208 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.069491 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.069537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.069549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.069569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.069583 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.084568 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.100360 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.118451 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.132740 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.145369 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.172404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.172474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.172489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.172511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.172526 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.176964 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.192562 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.211629 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.225940 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.239028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.252400 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.269289 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:44Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.275253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.275296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.275309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.275341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.275354 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.378060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.378122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.378139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.378167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.378190 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.410380 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:06:02.100117899 +0000 UTC Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.438990 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:44 crc kubenswrapper[4810]: E0219 15:10:44.439112 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.480013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.480075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.480093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.480118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.480138 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.582963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.583014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.583028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.583049 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.583061 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.686301 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.686401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.686420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.686451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.686470 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.790152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.790230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.790249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.790277 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.790299 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.893129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.893235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.893260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.893295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.893318 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.996826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.996886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.996902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.996923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:44 crc kubenswrapper[4810]: I0219 15:10:44.996940 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:44Z","lastTransitionTime":"2026-02-19T15:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.101128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.101182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.101199 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.101223 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.101237 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.211488 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.211567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.211591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.211623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.211647 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.315096 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.315166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.315184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.315213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.315234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.410968 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:52:45.771015556 +0000 UTC Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.419576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.419693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.419714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.419743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.419763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.438947 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:45 crc kubenswrapper[4810]: E0219 15:10:45.439092 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.439118 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.438944 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:45 crc kubenswrapper[4810]: E0219 15:10:45.439310 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:45 crc kubenswrapper[4810]: E0219 15:10:45.439490 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.523426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.523466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.523475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.523492 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.523502 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.626655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.626710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.626721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.626739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.626750 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.729880 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.729951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.729968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.729997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.730015 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.834489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.834551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.834569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.834598 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.834616 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.938425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.938517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.938540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.938572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:45 crc kubenswrapper[4810]: I0219 15:10:45.938592 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:45Z","lastTransitionTime":"2026-02-19T15:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.041815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.041872 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.041889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.041914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.041932 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.145856 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.145933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.145956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.145985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.146010 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.248351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.248434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.248456 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.248485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.248505 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.351668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.351711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.351749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.351772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.351786 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.411557 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:22:30.987652663 +0000 UTC Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.439079 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:46 crc kubenswrapper[4810]: E0219 15:10:46.439383 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.455353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.455408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.455424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.455447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.455461 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.558977 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.559081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.559115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.559154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.559184 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.662486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.662561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.662574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.662600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.662615 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.875818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.875922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.875943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.875974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.875994 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.979456 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.979510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.979528 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.979553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:46 crc kubenswrapper[4810]: I0219 15:10:46.979570 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:46Z","lastTransitionTime":"2026-02-19T15:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.083127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.083192 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.083211 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.083238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.083256 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.186724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.186794 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.186813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.186841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.186861 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.290823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.290890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.290907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.290937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.290955 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.394911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.394978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.394994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.395023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.395043 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.412410 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:53:57.969622518 +0000 UTC Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.438985 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.439086 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.439014 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:47 crc kubenswrapper[4810]: E0219 15:10:47.439228 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:47 crc kubenswrapper[4810]: E0219 15:10:47.439550 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:47 crc kubenswrapper[4810]: E0219 15:10:47.439637 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.498743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.498812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.498833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.498855 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.498874 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.601584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.601650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.601726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.601753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.601773 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.705101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.705174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.705200 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.705235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.705262 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.808791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.808856 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.808870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.808891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.808906 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.911521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.911592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.911607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.911630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:47 crc kubenswrapper[4810]: I0219 15:10:47.911644 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:47Z","lastTransitionTime":"2026-02-19T15:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.014307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.014368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.014378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.014400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.014410 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.117449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.117515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.117531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.117552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.117585 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.220941 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.221019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.221031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.221054 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.221067 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.324716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.324834 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.324853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.324879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.324896 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.412987 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:54:47.930522842 +0000 UTC Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.427498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.427544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.427558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.427584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.427598 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.438864 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:48 crc kubenswrapper[4810]: E0219 15:10:48.439042 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.531394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.531467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.531486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.531520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.531542 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.637555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.637622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.637641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.637670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.637689 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.740269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.740378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.740398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.740426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.740447 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.844283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.844371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.844386 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.844410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.844428 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.947568 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.947661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.947682 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.947709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:48 crc kubenswrapper[4810]: I0219 15:10:48.947727 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:48Z","lastTransitionTime":"2026-02-19T15:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.051072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.051137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.051159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.051214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.051239 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.154007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.154135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.154155 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.154181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.154197 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.257933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.258030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.258048 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.258077 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.258096 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.361281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.361393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.361420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.361446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.361462 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.413499 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:45:58.183236963 +0000 UTC Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.439004 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.439257 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.439043 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.439429 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.439035 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.439819 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.464460 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.464505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.464521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.464545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.464561 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.567655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.567732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.567750 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.567779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.567798 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.647346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.647412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.647425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.647445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.647460 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.665419 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.669611 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.669645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.669654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.669668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.669678 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.684458 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.689816 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.689884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.689907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.689938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.689959 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.712601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.712686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.712709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.712737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.712760 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.728290 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.740071 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.740176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.740192 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.740233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.740252 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.759704 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7961bd2b-8ad7-4e25-b313-8f82bef01c62\\\",\\\"systemUUID\\\":\\\"60bcb373-142f-4da9-846e-4d055863e63a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:49Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:49 crc kubenswrapper[4810]: E0219 15:10:49.759962 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.762516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.762564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.762584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.762612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.762632 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.866020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.866088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.866099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.866120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.866134 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.969531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.969643 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.969666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.969697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:49 crc kubenswrapper[4810]: I0219 15:10:49.969722 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:49Z","lastTransitionTime":"2026-02-19T15:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.072959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.073015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.073027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.073049 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.073065 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.176609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.176693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.176716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.176752 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.176774 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.279599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.279651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.279664 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.279685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.279699 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.383370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.383425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.383444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.383473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.383492 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.414221 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:43:34.600184443 +0000 UTC Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.438687 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:50 crc kubenswrapper[4810]: E0219 15:10:50.439228 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.439572 4810 scope.go:117] "RemoveContainer" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.487541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.488033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.488047 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.488067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.488080 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.591129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.591198 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.591217 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.591245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.591264 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.694610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.694656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.694673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.694698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.694715 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.798140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.798203 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.798221 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.798251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.798272 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.901277 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.901317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.901339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.901356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:50 crc kubenswrapper[4810]: I0219 15:10:50.901367 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:50Z","lastTransitionTime":"2026-02-19T15:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.000488 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/2.log" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.003133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.003176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.003191 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.003214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.003231 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.004290 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.005106 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.026231 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
9T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.045169 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.066468 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.082134 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc 
kubenswrapper[4810]: I0219 15:10:51.097137 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.105903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.105959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.105981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 
15:10:51.106011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.106031 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.112545 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.128064 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.140907 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254
228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.156017 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.175065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.199834 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.209513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.209550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.209568 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.209588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.209603 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.236375 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.262712 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.280312 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.294370 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.312479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.312532 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.312543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.312561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.312573 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.316124 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.332818 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.403107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414473 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:57:16.645382422 +0000 UTC Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414792 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.414847 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.438624 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.438699 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.438624 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:51 crc kubenswrapper[4810]: E0219 15:10:51.438793 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:51 crc kubenswrapper[4810]: E0219 15:10:51.438900 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:51 crc kubenswrapper[4810]: E0219 15:10:51.439018 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.451069 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.470823 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.487010 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.501795 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.514432 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.517700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.517739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.517751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.517770 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.517784 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.529024 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.539670 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.552734 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.567568 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.585209 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.597650 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc 
kubenswrapper[4810]: I0219 15:10:51.610627 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.620030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.620066 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.620079 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 
15:10:51.620098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.620111 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.629772 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.640598 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.654578 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.667432 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee
b662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.690149 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.722661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.722698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.722709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.722727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.722738 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.733626 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T15:10:51Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.825717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.825761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.825774 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.825793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.825806 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.928452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.928524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.928547 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.928575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:51 crc kubenswrapper[4810]: I0219 15:10:51.928594 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:51Z","lastTransitionTime":"2026-02-19T15:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.012278 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/3.log" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.014490 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/2.log" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.020006 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" exitCode=1 Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.020105 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.020712 4810 scope.go:117] "RemoveContainer" containerID="4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.020998 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:10:52 crc kubenswrapper[4810]: E0219 15:10:52.021198 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.031825 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.031870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.031885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.031909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.031923 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.040510 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.059842 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.071977 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc 
kubenswrapper[4810]: I0219 15:10:52.084941 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.103391 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687
cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.117475 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.131590 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.136104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.136140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.136154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.136175 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.136186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.148053 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca
251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.165935 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.181414 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.195947 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.216551 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bb8c0b89e0d91440beb93dc0fe78cebe4ca481940af3c8affe5efd75d589980\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:23Z\\\",\\\"message\\\":\\\"responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 15:10:23.623215 6458 services_controller.go:443] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.16\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9001, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0219 15:10:23.623235 6458 services_controller.go:444] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0219 15:10:23.623244 6458 services_controller.go:445] Built service openshift-machine-config-operator/machine-config-controller LB template configs for network=default: []services.lbConfig(nil)\\\\nF0219 15:10:23.623259 6458 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:51Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487438 6860 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487453 6860 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 15:10:51.487498 6860 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 15:10:51.487670 6860 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:51.487715 6860 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487769 6860 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487918 6860 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487982 6860 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.489213 6860 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.239796 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.239871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.239896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.239933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.239957 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.242544 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd
5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.264962 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.280159 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.295460 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.317107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.339661 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:52Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.343108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.343161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.343173 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.343195 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.343207 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.414791 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:53:42.729029403 +0000 UTC Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.438802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:52 crc kubenswrapper[4810]: E0219 15:10:52.439136 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.445945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.445987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.445997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.446015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.446028 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.549527 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.549880 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.550019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.550208 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.550374 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.654303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.654404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.654424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.654450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.654467 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.757521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.757584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.757603 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.757630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.757649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.860853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.861197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.861406 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.861576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.861716 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.965316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.965407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.965423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.965449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:52 crc kubenswrapper[4810]: I0219 15:10:52.965466 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:52Z","lastTransitionTime":"2026-02-19T15:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.028594 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/3.log" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.034931 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:10:53 crc kubenswrapper[4810]: E0219 15:10:53.035714 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.059067 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bsztz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a45a199-beeb-4972-b796-15c958fe99d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:42Z\\\",\\\"message\\\":\\\"2026-02-19T15:09:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee\\\\n2026-02-19T15:09:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e927f559-f3e4-4463-8345-ac34840adcee to /host/opt/cni/bin/\\\\n2026-02-19T15:09:57Z [verbose] multus-daemon started\\\\n2026-02-19T15:09:57Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T15:10:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcg8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bsztz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.068716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.068776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.068800 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.068828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.068851 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.082599 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.104750 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ba23405a6182e281d7d423f13e84a03c0f68cf69169f59fe1cd0d5881103c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.125805 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6694b0d-3264-43eb-bd52-5088c7d2bf15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T15:09:51Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 15:09:51.508430 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 15:09:51.508603 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 15:09:51.509986 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1367038872/tls.crt::/tmp/serving-cert-1367038872/tls.key\\\\\\\"\\\\nI0219 15:09:51.722990 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 15:09:51.726068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 15:09:51.726086 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 15:09:51.726104 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 15:09:51.726110 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 15:09:51.731095 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 15:09:51.731120 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0219 15:09:51.731119 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 15:09:51.731126 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 15:09:51.731136 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0219 15:09:51.731141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 15:09:51.731145 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 15:09:51.731149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 15:09:51.734010 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.146701 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.171682 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.171778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.171801 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.171832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.171854 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.174912 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"247e948b-3c17-4675-bd1c-f894b02d2817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7171cb2bba77488fbb60ec533172b2240d1871e89bc7760ae5c9b67ee6924354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7cbe5a969ecab49494c22bccec30a5b7050699729a8ae6244a4bb016ed9cbe64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://395e0b7a03b78f71b2823c72ea754e1909bde653a7107597bf07b6e664ba40dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5a0126c8bea39a89fe488d49dcb1e47063abc9839d148f348a7ce592038a450\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9864
da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9864da53b0851b0c4eadd5c11669af9b9e539660908ca14f44618824b5e4158\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://301c743749a2bcaee82a70d3c34a6cf13f880ef9986c390e279a7c056f9a99ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://075e06e40beb99d2338c5850692a5e89314b6342cc72007d75b602ac21a90d9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmwlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nmbsx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.193028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b72d3f7a-e418-4a21-af73-6a43ce3358c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7z7x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2x9v9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc 
kubenswrapper[4810]: I0219 15:10:53.212008 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b08ecb1c-d94b-4563-8f47-7334d52bc0c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af7cc216f97bd72a2d9a57e54a49676e2828ec7d0a0a035f22261c40a6976a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d59e5d74bcb7088e172f31cc207d421cd214b7d4a80382004fb30c167b04e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.235023 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfbf74ef-1e94-4826-8583-42b2e246ccf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f26baf8d6384c9133f7da338479eac54f1aa88609e4b0854078d4e85e8bf05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c08
97bef615b51a2ab10aa9e651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kjrkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t499d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.258268 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-t9jnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60d7228b-14bd-4988-8dca-cb89f487ba00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb03eee4bad3a037ff0d61a204c9c86a839e9d684dbb89a433a50afa9cfcfcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gzvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:55Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-t9jnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.275418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.275493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.275517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.275551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.275575 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.282099 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06adf9a1-ec31-4acc-9864-41549913d3f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618a71f428d132079ed8a6756ea331ec8fd20c32c98bc00691f56fa0b12669aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d6389909245cbf6091dfb1ef53a497dd254228acd53c508c32c4345f86b7b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nh6qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:10:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-l2g4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.304773 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4287c9-a6d9-470d-820f-316c037a5d1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:10:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8b81d39b5b02958a8f7fa7ec86ac9dd530f2c4674042f60b11209e0400433a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d625f766dc7e2dd97fcedcfc0ca251971bc0be7e0d04ed4b2bcf3939905b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81860198cc657673df0021324ba4fde92aa6ffbde974993d0553a080872f7fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb662772f98f52da96cf3b347f7b4990f4daa4552da61d972ecb8875d74b01c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.328236 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.353611 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9db08d13283889d8c43ab89157e2c33f300359ecf66b548100b30ddea3da2be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8472b62f0ac04884cd8a9d342c063860188b0f17e1ed2201c7ea7288fb44d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.370702 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-flbx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cc3ea69-b881-4fd4-ad4d-42803f27865b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07b26bfbc15981836877e6ddc6aea3fb26a4df1be38ac67c76f8a7a8f6b84b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqph9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-flbx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.377931 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.377965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.377980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.378000 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.378016 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.404558 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T15:10:51Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487438 6860 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487453 6860 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 15:10:51.487498 6860 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 15:10:51.487670 6860 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 15:10:51.487715 6860 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487769 6860 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487918 6860 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.487982 6860 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 15:10:51.489213 6860 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T15:10:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3924ff75e9c9ac7098
8594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T15:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T15:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7xqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8k7p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.415778 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:38:13.206014172 +0000 UTC Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.425315 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d74a04f-ffb6-47fb-8823-2fd7f3df4f32\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2070ce19c777ad791dfd3e5d99eded43e479f4cbbaefe96c5ed8525aa86e5ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5559fbe43f7e18617e2a46182d621a6310ae4d0b10548bae2bb098588285e081\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbf8815e560483199cc8f453952895d3e5d831e7c52671b204187dc686fa4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T15:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T15:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.439500 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.439524 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:53 crc kubenswrapper[4810]: E0219 15:10:53.439705 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.439809 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:53 crc kubenswrapper[4810]: E0219 15:10:53.439965 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:53 crc kubenswrapper[4810]: E0219 15:10:53.440124 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.449321 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T15:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60fe11e60f880bde640f63640fdc6a1b66e3044dcd975c711c45a15aed3fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T15:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T15:10:53Z is after 2025-08-24T17:21:41Z" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.481516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.481576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.481589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.481609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.481623 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.585249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.585318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.585385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.585442 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.585461 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.688476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.688555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.688574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.688601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.688621 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.792345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.792453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.792469 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.792490 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.792504 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.895163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.895240 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.895299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.895344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.895361 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.998144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.998218 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.998238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.998271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:53 crc kubenswrapper[4810]: I0219 15:10:53.998291 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:53Z","lastTransitionTime":"2026-02-19T15:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.101638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.101704 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.101722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.101751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.101769 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.204928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.204966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.204977 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.204994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.205006 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.308396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.308481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.308513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.308541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.308554 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.412210 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.412290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.412309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.412381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.412405 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.416407 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:03:54.26117333 +0000 UTC Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.438947 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:54 crc kubenswrapper[4810]: E0219 15:10:54.439280 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.515488 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.515557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.515579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.515609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.515631 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.619396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.619466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.619485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.619513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.619533 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.722615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.722683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.722708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.722745 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.722774 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.826441 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.826502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.826519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.826547 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.826565 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.929779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.929854 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.929871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.929903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:54 crc kubenswrapper[4810]: I0219 15:10:54.929924 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:54Z","lastTransitionTime":"2026-02-19T15:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.033191 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.033268 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.033286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.033315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.033371 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.136630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.136679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.136695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.136720 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.136737 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.239692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.240269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.240455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.240615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.240782 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.268448 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.268639 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 15:11:59.268600186 +0000 UTC m=+148.750630350 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.343934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.343987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.344003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.344028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.344045 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.369728 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370000 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370047 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370072 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370159 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.370125447 +0000 UTC m=+148.852155621 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370300 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370386 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370410 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.370542 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.370513946 +0000 UTC m=+148.852544100 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.370732 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.370931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.371061 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.371142 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.371120931 +0000 UTC m=+148.853151095 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.371515 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.371612 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.371904 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.37187856 +0000 UTC m=+148.853908714 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.417058 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:21:41.604414512 +0000 UTC Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.438404 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.438484 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.438404 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.438587 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.438717 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:55 crc kubenswrapper[4810]: E0219 15:10:55.438785 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.446051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.446254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.446388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.446556 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.446689 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.550238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.550295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.550318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.550409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.550435 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.654291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.654455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.654480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.654518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.654545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.757538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.757625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.757651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.757685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.757708 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.861741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.862108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.862233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.862352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.862451 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.964558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.964628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.964647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.964676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:55 crc kubenswrapper[4810]: I0219 15:10:55.964695 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:55Z","lastTransitionTime":"2026-02-19T15:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.068073 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.068141 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.068154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.068189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.068204 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.171989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.172048 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.172065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.172089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.172104 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.275436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.275509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.275527 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.275558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.275578 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.379080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.379187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.379208 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.379236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.379257 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.417862 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:53:57.638221094 +0000 UTC Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.438598 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:56 crc kubenswrapper[4810]: E0219 15:10:56.439059 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.481934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.481991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.482006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.482031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.482048 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.585772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.585881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.585907 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.585939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.585965 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.688426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.688500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.688521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.688549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.688567 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.791208 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.791282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.791303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.791345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.791358 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.894122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.894205 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.894224 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.894255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.894280 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.997814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.997895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.997916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.997946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:56 crc kubenswrapper[4810]: I0219 15:10:56.997965 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:56Z","lastTransitionTime":"2026-02-19T15:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.100463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.100553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.100585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.100621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.100644 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.203725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.203812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.203834 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.203868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.203896 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.306649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.306757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.306776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.306804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.306821 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.410233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.410285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.410295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.410314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.410345 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.418505 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:12:12.360329026 +0000 UTC Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.439758 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.439879 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:57 crc kubenswrapper[4810]: E0219 15:10:57.439977 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.440104 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:57 crc kubenswrapper[4810]: E0219 15:10:57.440094 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:57 crc kubenswrapper[4810]: E0219 15:10:57.440216 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.464781 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.513044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.513110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.513121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.513142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.513154 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.616351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.616437 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.616448 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.616468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.616480 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.719885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.719963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.719985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.720015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.720038 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.823710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.823765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.823780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.823802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.823818 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.926341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.926390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.926404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.926423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:57 crc kubenswrapper[4810]: I0219 15:10:57.926435 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:57Z","lastTransitionTime":"2026-02-19T15:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.030199 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.030239 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.030248 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.030264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.030275 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.133087 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.133613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.133626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.133647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.133663 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.236548 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.236591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.236610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.236633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.236649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.339206 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.339266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.339282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.339300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.339311 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.419915 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:04:07.677122347 +0000 UTC Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.438385 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:10:58 crc kubenswrapper[4810]: E0219 15:10:58.438562 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.442242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.442292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.442303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.442321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.442364 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.544357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.544620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.544859 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.545013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.545223 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.649006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.649238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.649437 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.649618 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.649761 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.752492 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.752549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.752566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.752653 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.752672 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.856196 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.856261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.856279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.856305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.856366 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.959346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.959412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.959431 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.959458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:58 crc kubenswrapper[4810]: I0219 15:10:58.959476 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:58Z","lastTransitionTime":"2026-02-19T15:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.062255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.062668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.062784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.062875 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.062969 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.166554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.166929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.167145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.167305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.167489 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.271167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.271257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.271292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.271352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.271377 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.374729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.375197 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.375369 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.375528 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.375668 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.420817 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:23:05.403484802 +0000 UTC Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.439008 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:10:59 crc kubenswrapper[4810]: E0219 15:10:59.439569 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.439635 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.439661 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:10:59 crc kubenswrapper[4810]: E0219 15:10:59.440061 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:10:59 crc kubenswrapper[4810]: E0219 15:10:59.440076 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.478897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.478978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.479001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.479032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.479054 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.582975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.583043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.583064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.583094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.583121 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.687055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.687121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.687134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.687160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.687173 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.790762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.790845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.790868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.790896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.790917 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.894441 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.894498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.894510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.894532 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.894545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.914161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.914241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.914260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.914292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.914313 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T15:10:59Z","lastTransitionTime":"2026-02-19T15:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.981212 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc"] Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.981996 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.984699 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.984813 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.984873 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 15:10:59 crc kubenswrapper[4810]: I0219 15:10:59.986456 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.028304 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e566cb6c-c157-4186-91e6-a0474949d42e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.028483 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.028521 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e566cb6c-c157-4186-91e6-a0474949d42e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.028593 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.028627 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e566cb6c-c157-4186-91e6-a0474949d42e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.040739 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.040698574 podStartE2EDuration="1m8.040698574s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.011478881 +0000 UTC m=+89.493509045" watchObservedRunningTime="2026-02-19 15:11:00.040698574 +0000 UTC m=+89.522728748" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.123652 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-flbx5" podStartSLOduration=68.123626605 podStartE2EDuration="1m8.123626605s" 
podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.08540758 +0000 UTC m=+89.567437704" watchObservedRunningTime="2026-02-19 15:11:00.123626605 +0000 UTC m=+89.605656739" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129402 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e566cb6c-c157-4186-91e6-a0474949d42e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129469 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129541 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129572 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e566cb6c-c157-4186-91e6-a0474949d42e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129614 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e566cb6c-c157-4186-91e6-a0474949d42e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129662 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.129737 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e566cb6c-c157-4186-91e6-a0474949d42e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.130456 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e566cb6c-c157-4186-91e6-a0474949d42e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.139717 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e566cb6c-c157-4186-91e6-a0474949d42e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.150749 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e566cb6c-c157-4186-91e6-a0474949d42e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r8wfc\" (UID: \"e566cb6c-c157-4186-91e6-a0474949d42e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.190789 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bsztz" podStartSLOduration=68.190757505 podStartE2EDuration="1m8.190757505s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.177469127 +0000 UTC m=+89.659499281" watchObservedRunningTime="2026-02-19 15:11:00.190757505 +0000 UTC m=+89.672787659" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.222895 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.222868839 podStartE2EDuration="3.222868839s" podCreationTimestamp="2026-02-19 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.220638694 +0000 UTC m=+89.702668838" watchObservedRunningTime="2026-02-19 15:11:00.222868839 +0000 UTC m=+89.704898973" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.223589 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=23.223578717 podStartE2EDuration="23.223578717s" podCreationTimestamp="2026-02-19 15:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.191279928 +0000 UTC m=+89.673310092" watchObservedRunningTime="2026-02-19 15:11:00.223578717 +0000 UTC m=+89.705608851" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.250289 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.250263327 podStartE2EDuration="1m8.250263327s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.249298223 +0000 UTC m=+89.731328377" watchObservedRunningTime="2026-02-19 15:11:00.250263327 +0000 UTC m=+89.732293461" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.295586 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nmbsx" podStartSLOduration=67.295553877 podStartE2EDuration="1m7.295553877s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.293957338 +0000 UTC m=+89.775987472" watchObservedRunningTime="2026-02-19 15:11:00.295553877 +0000 UTC m=+89.777584021" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.299698 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" Feb 19 15:11:00 crc kubenswrapper[4810]: W0219 15:11:00.329864 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode566cb6c_c157_4186_91e6_a0474949d42e.slice/crio-7dae6c7b79931e41fae8b7e491d59a8e9eeeaa3cfdbd5c988f0ccc51e00792fc WatchSource:0}: Error finding container 7dae6c7b79931e41fae8b7e491d59a8e9eeeaa3cfdbd5c988f0ccc51e00792fc: Status 404 returned error can't find the container with id 7dae6c7b79931e41fae8b7e491d59a8e9eeeaa3cfdbd5c988f0ccc51e00792fc Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.347181 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.347154383 podStartE2EDuration="42.347154383s" podCreationTimestamp="2026-02-19 15:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.346131258 +0000 UTC m=+89.828161392" watchObservedRunningTime="2026-02-19 15:11:00.347154383 +0000 UTC m=+89.829184517" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.382881 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podStartSLOduration=68.382861066 podStartE2EDuration="1m8.382861066s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.382344283 +0000 UTC m=+89.864374437" watchObservedRunningTime="2026-02-19 15:11:00.382861066 +0000 UTC m=+89.864891200" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.398718 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-t9jnq" 
podStartSLOduration=68.398698248 podStartE2EDuration="1m8.398698248s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.397852627 +0000 UTC m=+89.879882751" watchObservedRunningTime="2026-02-19 15:11:00.398698248 +0000 UTC m=+89.880728372" Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.421035 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:54:59.615872567 +0000 UTC Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.421112 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.427817 4810 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 15:11:00 crc kubenswrapper[4810]: I0219 15:11:00.438426 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:00 crc kubenswrapper[4810]: E0219 15:11:00.438675 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.063577 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" event={"ID":"e566cb6c-c157-4186-91e6-a0474949d42e","Type":"ContainerStarted","Data":"482b0b1df225989bc6d0b54aa328c18be90fb10c12be5eb9f4dc2a9771aa5312"} Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.063964 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" event={"ID":"e566cb6c-c157-4186-91e6-a0474949d42e","Type":"ContainerStarted","Data":"7dae6c7b79931e41fae8b7e491d59a8e9eeeaa3cfdbd5c988f0ccc51e00792fc"} Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.083906 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-l2g4t" podStartSLOduration=68.083867492 podStartE2EDuration="1m8.083867492s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:00.414859628 +0000 UTC m=+89.896889752" watchObservedRunningTime="2026-02-19 15:11:01.083867492 +0000 UTC m=+90.565897626" Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.084098 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r8wfc" podStartSLOduration=69.084092447 podStartE2EDuration="1m9.084092447s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:01.083618975 +0000 UTC m=+90.565649139" watchObservedRunningTime="2026-02-19 15:11:01.084092447 +0000 UTC m=+90.566122581" 
Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.438901 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.439031 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:01 crc kubenswrapper[4810]: I0219 15:11:01.438912 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:01 crc kubenswrapper[4810]: E0219 15:11:01.441175 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:01 crc kubenswrapper[4810]: E0219 15:11:01.441397 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:01 crc kubenswrapper[4810]: E0219 15:11:01.441571 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:02 crc kubenswrapper[4810]: I0219 15:11:02.438456 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:02 crc kubenswrapper[4810]: E0219 15:11:02.439010 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:03 crc kubenswrapper[4810]: I0219 15:11:03.438733 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:03 crc kubenswrapper[4810]: I0219 15:11:03.438832 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:03 crc kubenswrapper[4810]: I0219 15:11:03.438844 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:03 crc kubenswrapper[4810]: E0219 15:11:03.439173 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:03 crc kubenswrapper[4810]: E0219 15:11:03.439472 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:03 crc kubenswrapper[4810]: E0219 15:11:03.439734 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:04 crc kubenswrapper[4810]: I0219 15:11:04.438948 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:04 crc kubenswrapper[4810]: E0219 15:11:04.439115 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:05 crc kubenswrapper[4810]: I0219 15:11:05.438970 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:05 crc kubenswrapper[4810]: I0219 15:11:05.438996 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:05 crc kubenswrapper[4810]: I0219 15:11:05.439376 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:05 crc kubenswrapper[4810]: E0219 15:11:05.439678 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:05 crc kubenswrapper[4810]: E0219 15:11:05.439839 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:05 crc kubenswrapper[4810]: E0219 15:11:05.440100 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:06 crc kubenswrapper[4810]: I0219 15:11:06.438270 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:06 crc kubenswrapper[4810]: E0219 15:11:06.438496 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:07 crc kubenswrapper[4810]: I0219 15:11:07.438999 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:07 crc kubenswrapper[4810]: I0219 15:11:07.439051 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:07 crc kubenswrapper[4810]: I0219 15:11:07.438990 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:07 crc kubenswrapper[4810]: E0219 15:11:07.439194 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:07 crc kubenswrapper[4810]: E0219 15:11:07.439791 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:07 crc kubenswrapper[4810]: E0219 15:11:07.439901 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:07 crc kubenswrapper[4810]: I0219 15:11:07.440341 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:11:07 crc kubenswrapper[4810]: E0219 15:11:07.440580 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:11:08 crc kubenswrapper[4810]: I0219 15:11:08.438814 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:08 crc kubenswrapper[4810]: E0219 15:11:08.439107 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:09 crc kubenswrapper[4810]: I0219 15:11:09.438599 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:09 crc kubenswrapper[4810]: I0219 15:11:09.438708 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:09 crc kubenswrapper[4810]: I0219 15:11:09.438612 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:09 crc kubenswrapper[4810]: E0219 15:11:09.438795 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:09 crc kubenswrapper[4810]: E0219 15:11:09.438947 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:09 crc kubenswrapper[4810]: E0219 15:11:09.439143 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:10 crc kubenswrapper[4810]: I0219 15:11:10.439484 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:10 crc kubenswrapper[4810]: E0219 15:11:10.439786 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:11 crc kubenswrapper[4810]: I0219 15:11:11.439286 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:11 crc kubenswrapper[4810]: I0219 15:11:11.439300 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:11 crc kubenswrapper[4810]: I0219 15:11:11.439113 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:11 crc kubenswrapper[4810]: E0219 15:11:11.441800 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:11 crc kubenswrapper[4810]: E0219 15:11:11.442169 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:11 crc kubenswrapper[4810]: E0219 15:11:11.442218 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:11 crc kubenswrapper[4810]: I0219 15:11:11.564576 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:11 crc kubenswrapper[4810]: E0219 15:11:11.564739 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:11:11 crc kubenswrapper[4810]: E0219 15:11:11.564794 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs podName:b72d3f7a-e418-4a21-af73-6a43ce3358c1 nodeName:}" failed. No retries permitted until 2026-02-19 15:12:15.564775924 +0000 UTC m=+165.046806048 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs") pod "network-metrics-daemon-2x9v9" (UID: "b72d3f7a-e418-4a21-af73-6a43ce3358c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 15:11:12 crc kubenswrapper[4810]: I0219 15:11:12.438615 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:12 crc kubenswrapper[4810]: E0219 15:11:12.438812 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:13 crc kubenswrapper[4810]: I0219 15:11:13.438877 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:13 crc kubenswrapper[4810]: I0219 15:11:13.439073 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:13 crc kubenswrapper[4810]: I0219 15:11:13.439119 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:13 crc kubenswrapper[4810]: E0219 15:11:13.439409 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:13 crc kubenswrapper[4810]: E0219 15:11:13.439591 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:13 crc kubenswrapper[4810]: E0219 15:11:13.439864 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:14 crc kubenswrapper[4810]: I0219 15:11:14.438685 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:14 crc kubenswrapper[4810]: E0219 15:11:14.438895 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:15 crc kubenswrapper[4810]: I0219 15:11:15.438758 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:15 crc kubenswrapper[4810]: I0219 15:11:15.438869 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:15 crc kubenswrapper[4810]: I0219 15:11:15.438990 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:15 crc kubenswrapper[4810]: E0219 15:11:15.439053 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:15 crc kubenswrapper[4810]: E0219 15:11:15.439251 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:15 crc kubenswrapper[4810]: E0219 15:11:15.439441 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:16 crc kubenswrapper[4810]: I0219 15:11:16.438598 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:16 crc kubenswrapper[4810]: E0219 15:11:16.438817 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:17 crc kubenswrapper[4810]: I0219 15:11:17.438879 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:17 crc kubenswrapper[4810]: I0219 15:11:17.438936 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:17 crc kubenswrapper[4810]: E0219 15:11:17.439157 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:17 crc kubenswrapper[4810]: E0219 15:11:17.439251 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:17 crc kubenswrapper[4810]: I0219 15:11:17.439630 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:17 crc kubenswrapper[4810]: E0219 15:11:17.439886 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:18 crc kubenswrapper[4810]: I0219 15:11:18.438447 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:18 crc kubenswrapper[4810]: E0219 15:11:18.438673 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:19 crc kubenswrapper[4810]: I0219 15:11:19.439383 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:19 crc kubenswrapper[4810]: I0219 15:11:19.439426 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:19 crc kubenswrapper[4810]: E0219 15:11:19.439607 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:19 crc kubenswrapper[4810]: I0219 15:11:19.439646 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:19 crc kubenswrapper[4810]: E0219 15:11:19.439959 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:19 crc kubenswrapper[4810]: E0219 15:11:19.440461 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:20 crc kubenswrapper[4810]: I0219 15:11:20.438606 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:20 crc kubenswrapper[4810]: E0219 15:11:20.438784 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:21 crc kubenswrapper[4810]: I0219 15:11:21.438526 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:21 crc kubenswrapper[4810]: I0219 15:11:21.438666 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:21 crc kubenswrapper[4810]: I0219 15:11:21.438542 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:21 crc kubenswrapper[4810]: E0219 15:11:21.438817 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:21 crc kubenswrapper[4810]: E0219 15:11:21.438995 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:21 crc kubenswrapper[4810]: E0219 15:11:21.439159 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:22 crc kubenswrapper[4810]: I0219 15:11:22.438859 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:22 crc kubenswrapper[4810]: E0219 15:11:22.439521 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:22 crc kubenswrapper[4810]: I0219 15:11:22.439919 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:11:22 crc kubenswrapper[4810]: E0219 15:11:22.440153 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8k7p5_openshift-ovn-kubernetes(c5a8a15c-53e8-4868-8feb-dcd4e83939a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" Feb 19 15:11:23 crc kubenswrapper[4810]: I0219 15:11:23.439123 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:23 crc kubenswrapper[4810]: I0219 15:11:23.439208 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:23 crc kubenswrapper[4810]: E0219 15:11:23.439350 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:23 crc kubenswrapper[4810]: I0219 15:11:23.439413 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:23 crc kubenswrapper[4810]: E0219 15:11:23.439500 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:23 crc kubenswrapper[4810]: E0219 15:11:23.439586 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:24 crc kubenswrapper[4810]: I0219 15:11:24.439212 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:24 crc kubenswrapper[4810]: E0219 15:11:24.439973 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:25 crc kubenswrapper[4810]: I0219 15:11:25.438941 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:25 crc kubenswrapper[4810]: I0219 15:11:25.438982 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:25 crc kubenswrapper[4810]: I0219 15:11:25.439209 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:25 crc kubenswrapper[4810]: E0219 15:11:25.439362 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:25 crc kubenswrapper[4810]: E0219 15:11:25.439752 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:25 crc kubenswrapper[4810]: E0219 15:11:25.439979 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:26 crc kubenswrapper[4810]: I0219 15:11:26.438818 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:26 crc kubenswrapper[4810]: E0219 15:11:26.439027 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:27 crc kubenswrapper[4810]: I0219 15:11:27.439093 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:27 crc kubenswrapper[4810]: I0219 15:11:27.439180 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:27 crc kubenswrapper[4810]: I0219 15:11:27.439111 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:27 crc kubenswrapper[4810]: E0219 15:11:27.439374 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:27 crc kubenswrapper[4810]: E0219 15:11:27.439510 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:27 crc kubenswrapper[4810]: E0219 15:11:27.439714 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:28 crc kubenswrapper[4810]: I0219 15:11:28.438695 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:28 crc kubenswrapper[4810]: E0219 15:11:28.439018 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.177770 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/1.log" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.178433 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/0.log" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.178492 4810 generic.go:334] "Generic (PLEG): container finished" podID="2a45a199-beeb-4972-b796-15c958fe99d3" containerID="c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca" exitCode=1 Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.178533 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerDied","Data":"c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca"} Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.178586 4810 scope.go:117] "RemoveContainer" containerID="a674e283274f1e77f6ac89a8170137bf5852153bd4a35a80101a3cd0090b85d2" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.179115 4810 scope.go:117] "RemoveContainer" containerID="c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca" Feb 19 15:11:29 crc kubenswrapper[4810]: E0219 15:11:29.179355 4810 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bsztz_openshift-multus(2a45a199-beeb-4972-b796-15c958fe99d3)\"" pod="openshift-multus/multus-bsztz" podUID="2a45a199-beeb-4972-b796-15c958fe99d3" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.438832 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.438869 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:29 crc kubenswrapper[4810]: I0219 15:11:29.438933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:29 crc kubenswrapper[4810]: E0219 15:11:29.439012 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:29 crc kubenswrapper[4810]: E0219 15:11:29.439195 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:29 crc kubenswrapper[4810]: E0219 15:11:29.439403 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:30 crc kubenswrapper[4810]: I0219 15:11:30.184456 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/1.log" Feb 19 15:11:30 crc kubenswrapper[4810]: I0219 15:11:30.439021 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:30 crc kubenswrapper[4810]: E0219 15:11:30.439204 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:31 crc kubenswrapper[4810]: E0219 15:11:31.389370 4810 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 15:11:31 crc kubenswrapper[4810]: I0219 15:11:31.439199 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:31 crc kubenswrapper[4810]: I0219 15:11:31.439238 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:31 crc kubenswrapper[4810]: I0219 15:11:31.439312 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:31 crc kubenswrapper[4810]: E0219 15:11:31.441269 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:31 crc kubenswrapper[4810]: E0219 15:11:31.441470 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:31 crc kubenswrapper[4810]: E0219 15:11:31.441619 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:31 crc kubenswrapper[4810]: E0219 15:11:31.587533 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 19 15:11:32 crc kubenswrapper[4810]: I0219 15:11:32.439187 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:32 crc kubenswrapper[4810]: E0219 15:11:32.439485 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:33 crc kubenswrapper[4810]: I0219 15:11:33.439247 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:33 crc kubenswrapper[4810]: I0219 15:11:33.439393 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:33 crc kubenswrapper[4810]: E0219 15:11:33.439632 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:33 crc kubenswrapper[4810]: E0219 15:11:33.439761 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:33 crc kubenswrapper[4810]: I0219 15:11:33.440213 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:33 crc kubenswrapper[4810]: E0219 15:11:33.440450 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:34 crc kubenswrapper[4810]: I0219 15:11:34.438743 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:34 crc kubenswrapper[4810]: E0219 15:11:34.439271 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:35 crc kubenswrapper[4810]: I0219 15:11:35.439109 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:35 crc kubenswrapper[4810]: E0219 15:11:35.439316 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:35 crc kubenswrapper[4810]: I0219 15:11:35.440516 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:35 crc kubenswrapper[4810]: E0219 15:11:35.440628 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:35 crc kubenswrapper[4810]: I0219 15:11:35.440809 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:35 crc kubenswrapper[4810]: E0219 15:11:35.440912 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:35 crc kubenswrapper[4810]: I0219 15:11:35.442473 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.211632 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/3.log" Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.214920 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerStarted","Data":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.215401 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.257678 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podStartSLOduration=103.257658128 podStartE2EDuration="1m43.257658128s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:36.256317625 +0000 UTC m=+125.738347749" watchObservedRunningTime="2026-02-19 15:11:36.257658128 +0000 UTC m=+125.739688252" Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.427415 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2x9v9"] Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.427567 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:36 crc kubenswrapper[4810]: E0219 15:11:36.427692 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:36 crc kubenswrapper[4810]: I0219 15:11:36.438802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:36 crc kubenswrapper[4810]: E0219 15:11:36.439934 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:36 crc kubenswrapper[4810]: E0219 15:11:36.589248 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 15:11:37 crc kubenswrapper[4810]: I0219 15:11:37.439406 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:37 crc kubenswrapper[4810]: I0219 15:11:37.439559 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:37 crc kubenswrapper[4810]: E0219 15:11:37.439620 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:37 crc kubenswrapper[4810]: E0219 15:11:37.439786 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:38 crc kubenswrapper[4810]: I0219 15:11:38.438597 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:38 crc kubenswrapper[4810]: I0219 15:11:38.438622 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:38 crc kubenswrapper[4810]: E0219 15:11:38.438863 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:38 crc kubenswrapper[4810]: E0219 15:11:38.439064 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:39 crc kubenswrapper[4810]: I0219 15:11:39.438603 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:39 crc kubenswrapper[4810]: E0219 15:11:39.438778 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:39 crc kubenswrapper[4810]: I0219 15:11:39.438801 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:39 crc kubenswrapper[4810]: E0219 15:11:39.439250 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:40 crc kubenswrapper[4810]: I0219 15:11:40.439033 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:40 crc kubenswrapper[4810]: I0219 15:11:40.439124 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:40 crc kubenswrapper[4810]: E0219 15:11:40.439470 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:40 crc kubenswrapper[4810]: I0219 15:11:40.439593 4810 scope.go:117] "RemoveContainer" containerID="c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca" Feb 19 15:11:40 crc kubenswrapper[4810]: E0219 15:11:40.439654 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:41 crc kubenswrapper[4810]: I0219 15:11:41.239126 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/1.log" Feb 19 15:11:41 crc kubenswrapper[4810]: I0219 15:11:41.239636 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerStarted","Data":"92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde"} Feb 19 15:11:41 crc kubenswrapper[4810]: I0219 15:11:41.438878 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:41 crc kubenswrapper[4810]: E0219 15:11:41.440472 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:41 crc kubenswrapper[4810]: I0219 15:11:41.440557 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:41 crc kubenswrapper[4810]: E0219 15:11:41.440708 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:41 crc kubenswrapper[4810]: E0219 15:11:41.589905 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 15:11:42 crc kubenswrapper[4810]: I0219 15:11:42.438587 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:42 crc kubenswrapper[4810]: I0219 15:11:42.438645 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:42 crc kubenswrapper[4810]: E0219 15:11:42.438790 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:42 crc kubenswrapper[4810]: E0219 15:11:42.438914 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:43 crc kubenswrapper[4810]: I0219 15:11:43.439048 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:43 crc kubenswrapper[4810]: E0219 15:11:43.439415 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:43 crc kubenswrapper[4810]: I0219 15:11:43.439501 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:43 crc kubenswrapper[4810]: E0219 15:11:43.439657 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:44 crc kubenswrapper[4810]: I0219 15:11:44.438300 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:44 crc kubenswrapper[4810]: I0219 15:11:44.438368 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:44 crc kubenswrapper[4810]: E0219 15:11:44.438506 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:44 crc kubenswrapper[4810]: E0219 15:11:44.438684 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:45 crc kubenswrapper[4810]: I0219 15:11:45.438502 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:45 crc kubenswrapper[4810]: I0219 15:11:45.438512 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:45 crc kubenswrapper[4810]: E0219 15:11:45.438744 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 15:11:45 crc kubenswrapper[4810]: E0219 15:11:45.438813 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 15:11:46 crc kubenswrapper[4810]: I0219 15:11:46.439013 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:46 crc kubenswrapper[4810]: I0219 15:11:46.439092 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:46 crc kubenswrapper[4810]: E0219 15:11:46.439240 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 15:11:46 crc kubenswrapper[4810]: E0219 15:11:46.439663 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2x9v9" podUID="b72d3f7a-e418-4a21-af73-6a43ce3358c1" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.438846 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.439064 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.443966 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.443991 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.444075 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 15:11:47 crc kubenswrapper[4810]: I0219 15:11:47.444176 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 15:11:48 crc kubenswrapper[4810]: I0219 15:11:48.438602 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:48 crc kubenswrapper[4810]: I0219 15:11:48.438671 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9" Feb 19 15:11:48 crc kubenswrapper[4810]: I0219 15:11:48.442464 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 15:11:48 crc kubenswrapper[4810]: I0219 15:11:48.443119 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.124943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.172626 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5d4rp"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.173261 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.175615 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.175797 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.176563 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.176672 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.176929 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.177069 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 
15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.178264 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.178528 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.183881 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.183894 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.196178 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l66cb"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.196802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.198596 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.201490 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.201909 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.202144 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.202432 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" 
Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.202560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.202928 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.203576 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.203901 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.204737 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.205121 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.206463 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.206637 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.206858 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.207763 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.207919 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.208231 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.209440 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.209765 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.211309 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jm2nk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.211674 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.214250 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.216297 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.216493 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.216678 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.216808 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.216855 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.217028 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.217067 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.217335 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wpzzq"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.217636 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.217878 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.218068 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.218517 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.218744 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.218835 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.219639 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.220774 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.220831 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221072 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221173 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221458 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221492 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221558 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221648 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221653 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.221919 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222260 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222383 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222496 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222579 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222706 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222745 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222857 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222902 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.222979 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.230917 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d8pqg"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.231358 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.249974 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.267029 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.267647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.267958 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.268252 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.268599 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.268831 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.268964 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit-dir\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269000 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-node-pullsecrets\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269019 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-image-import-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269078 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-client\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269095 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-serving-cert\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269108 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269121 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-encryption-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269144 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-serving-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269158 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.269170 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zrj\" (UniqueName: \"kubernetes.io/projected/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-kube-api-access-j2zrj\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.270759 4810 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.270908 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.271115 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.271307 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.271422 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.271811 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.271827 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.273411 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.273636 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.273787 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.274064 4810 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.276458 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.276499 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.276770 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.276883 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.277064 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.278713 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.279620 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.294730 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dkppn"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.295076 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.295341 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.295403 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hvw7f"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.295495 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.295551 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.297344 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.297581 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.297806 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zrsn2"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.298135 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.298486 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r74mv"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.298716 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.299038 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk"] Feb 19 
15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.299294 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.299696 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.299960 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.300118 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.300201 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.300361 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.300880 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.300180 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302068 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302247 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302284 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302559 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302629 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.302989 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303062 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303139 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303239 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303271 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303291 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303461 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303518 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303525 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303541 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303649 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303697 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303721 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303727 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 15:11:51 crc 
kubenswrapper[4810]: I0219 15:11:51.303741 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303812 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303820 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303902 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303834 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.303914 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.331680 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.332560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25cmt"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.333249 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.333580 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.334200 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.344679 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.345685 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.345913 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.346180 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.346293 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.388690 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.389590 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.390258 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.396823 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.397958 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-serving-cert\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.397400 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.398771 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399345 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-auth-proxy-config\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 
crc kubenswrapper[4810]: I0219 15:11:51.399437 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399583 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sn5\" (UniqueName: \"kubernetes.io/projected/fee373cf-50b8-42f4-b30d-4a3d230ca27e-kube-api-access-v4sn5\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399743 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-serving-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399851 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-encryption-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.399935 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ff8\" (UniqueName: \"kubernetes.io/projected/e8c9b478-4884-4c32-acf1-5fdec0cfac06-kube-api-access-v2ff8\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400206 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400247 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400272 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400368 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e4ea5aa-2074-4100-a916-6bdfb3331d43-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400393 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l478x\" (UniqueName: \"kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.400801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.401356 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-serving-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.402379 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.402489 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.402736 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.402986 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403045 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-service-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: 
\"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403066 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4b9328-0efe-42a8-9a73-a80eb6a26151-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hxq2\" (UniqueName: \"kubernetes.io/projected/c696fe96-0485-44d0-b4fb-161503c334e8-kube-api-access-9hxq2\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403441 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403537 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403570 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403837 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-config\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403886 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403909 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9k2\" (UniqueName: \"kubernetes.io/projected/0f46b00f-f770-4539-92f9-60e1146308ab-kube-api-access-7c9k2\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.403959 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404001 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-image-import-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404019 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-client\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404046 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4fb\" (UniqueName: \"kubernetes.io/projected/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-kube-api-access-7x4fb\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e4ea5aa-2074-4100-a916-6bdfb3331d43-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404084 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87t8g\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-kube-api-access-87t8g\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404109 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404128 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f46b00f-f770-4539-92f9-60e1146308ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404153 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404170 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239280e2-b335-4f87-89a8-00cb6f8e3c69-serving-cert\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404205 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-config\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404227 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-config\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4b9328-0efe-42a8-9a73-a80eb6a26151-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404269 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404302 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696fe96-0485-44d0-b4fb-161503c334e8-audit-dir\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404319 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-serving-cert\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404355 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbjx\" (UniqueName: \"kubernetes.io/projected/7a29951a-027e-49b4-a7ea-a8e363942414-kube-api-access-2mbjx\") pod \"downloads-7954f5f757-dkppn\" (UID: \"7a29951a-027e-49b4-a7ea-a8e363942414\") " pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404583 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-serving-cert\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.404633 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.405053 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.405147 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.405737 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.406049 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.406528 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407168 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407290 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zrj\" (UniqueName: \"kubernetes.io/projected/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-kube-api-access-j2zrj\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407397 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-audit-policies\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407500 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407592 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407632 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqb7\" (UniqueName: \"kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407671 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407709 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh895\" (UniqueName: \"kubernetes.io/projected/25e05c8a-335b-405c-9033-f689c21c5ecc-kube-api-access-jh895\") pod \"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407734 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7hk\" (UniqueName: \"kubernetes.io/projected/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-kube-api-access-kr7hk\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407830 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-encryption-config\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407856 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2sf\" (UniqueName: \"kubernetes.io/projected/77544608-a940-4c0c-9a1a-a5a98f480134-kube-api-access-jh2sf\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407895 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407931 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407962 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.407985 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408008 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408015 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-encryption-config\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 
15:11:51.408076 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcbs\" (UniqueName: \"kubernetes.io/projected/8f4b9328-0efe-42a8-9a73-a80eb6a26151-kube-api-access-hgcbs\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408106 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f46b00f-f770-4539-92f9-60e1146308ab-proxy-tls\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408145 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit-dir\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408179 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-client\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408214 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-audit-dir\") pod \"apiserver-76f77b778f-5d4rp\" (UID: 
\"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408257 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-node-pullsecrets\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408292 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408317 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7k5\" (UniqueName: \"kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408362 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-images\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408380 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-config\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408450 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-node-pullsecrets\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408500 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408530 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c9b478-4884-4c32-acf1-5fdec0cfac06-metrics-tls\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408557 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/25e05c8a-335b-405c-9033-f689c21c5ecc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 
15:11:51.408604 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408637 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/77544608-a940-4c0c-9a1a-a5a98f480134-machine-approver-tls\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408668 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzjq\" (UniqueName: \"kubernetes.io/projected/239280e2-b335-4f87-89a8-00cb6f8e3c69-kube-api-access-8gzjq\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408765 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-config\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408800 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408842 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-client\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.408883 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.413638 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.413808 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.414460 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.414468 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-image-import-ca\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.414570 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.414962 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.415605 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-etcd-client\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.415873 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.417032 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-serving-cert\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.419386 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmnw"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.420269 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.420354 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.421045 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.424568 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5d4rp"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.425501 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.427828 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.427915 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h68pj"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.428608 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.430685 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jm2nk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.435516 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.435628 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l66cb"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.447066 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.447106 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.459388 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zgcrd"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.461348 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.461552 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.461833 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.462584 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.463346 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.464926 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.465058 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.469841 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.476465 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.482471 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.485344 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dkppn"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.489407 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.491954 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zgcrd"] Feb 19 15:11:51 crc 
kubenswrapper[4810]: I0219 15:11:51.493338 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d8pqg"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.496707 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zrsn2"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.496956 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.501007 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25cmt"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.503005 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.506704 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.510224 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wpzzq"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511087 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-stats-auth\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511275 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-auth-proxy-config\") pod 
\"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ff8\" (UniqueName: \"kubernetes.io/projected/e8c9b478-4884-4c32-acf1-5fdec0cfac06-kube-api-access-v2ff8\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511585 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511765 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e4ea5aa-2074-4100-a916-6bdfb3331d43-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.511955 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ffdcba-57d6-4636-8373-f088926a716d-service-ca-bundle\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.512231 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvtp\" (UniqueName: \"kubernetes.io/projected/6a43ce91-6673-4641-a2d6-551afe72688d-kube-api-access-xmvtp\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.512941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.513099 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-config\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.512812 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.512298 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-auth-proxy-config\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc 
kubenswrapper[4810]: I0219 15:11:51.513715 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514082 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c9k2\" (UniqueName: \"kubernetes.io/projected/0f46b00f-f770-4539-92f9-60e1146308ab-kube-api-access-7c9k2\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514242 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a2812-dc11-4b60-b911-bb41a0d39d7d-config\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514517 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514697 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-srv-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514814 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87t8g\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-kube-api-access-87t8g\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.514966 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.515081 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f46b00f-f770-4539-92f9-60e1146308ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.515407 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 
15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.515523 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.515635 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239280e2-b335-4f87-89a8-00cb6f8e3c69-serving-cert\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.515739 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-config\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.516066 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-config\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.516382 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.516549 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-config\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.516680 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4b9328-0efe-42a8-9a73-a80eb6a26151-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.516801 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517000 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a43ce91-6673-4641-a2d6-551afe72688d-tmpfs\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517093 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-apiservice-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517177 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517271 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517390 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr7hk\" (UniqueName: \"kubernetes.io/projected/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-kube-api-access-kr7hk\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517479 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517563 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4b9328-0efe-42a8-9a73-a80eb6a26151-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517578 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517615 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517680 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517685 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-config\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517759 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-config\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517763 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f46b00f-f770-4539-92f9-60e1146308ab-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.517756 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518274 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518339 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518364 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-metrics-certs\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518387 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.518389 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f46b00f-f770-4539-92f9-60e1146308ab-proxy-tls\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519757 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-client\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 
15:11:51.519789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqq2f\" (UniqueName: \"kubernetes.io/projected/13f4a10d-50aa-41ec-9931-cb835ba1f54c-kube-api-access-zqq2f\") pod \"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519812 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519834 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519867 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519882 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519886 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c9b478-4884-4c32-acf1-5fdec0cfac06-metrics-tls\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519929 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519947 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/77544608-a940-4c0c-9a1a-a5a98f480134-machine-approver-tls\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519967 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzjq\" (UniqueName: \"kubernetes.io/projected/239280e2-b335-4f87-89a8-00cb6f8e3c69-kube-api-access-8gzjq\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.519993 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2f48\" (UniqueName: 
\"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-kube-api-access-d2f48\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.520017 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.520050 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-config\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.520595 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.521827 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.524163 4810 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.525660 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526560 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526574 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526673 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526734 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: 
\"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526765 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-serving-cert\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526802 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526607 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/239280e2-b335-4f87-89a8-00cb6f8e3c69-serving-cert\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.526811 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sn5\" (UniqueName: \"kubernetes.io/projected/fee373cf-50b8-42f4-b30d-4a3d230ca27e-kube-api-access-v4sn5\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527191 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-profile-collector-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnnp\" (UniqueName: \"kubernetes.io/projected/bd317655-38ea-4fdb-95d0-82adc08456a8-kube-api-access-dbnnp\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527278 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l478x\" (UniqueName: \"kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a2812-dc11-4b60-b911-bb41a0d39d7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527357 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7a0b611-cb8e-431b-b527-b6164471c85f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527380 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527403 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-service-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527422 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4b9328-0efe-42a8-9a73-a80eb6a26151-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527444 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527466 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527487 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527509 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hxq2\" (UniqueName: \"kubernetes.io/projected/c696fe96-0485-44d0-b4fb-161503c334e8-kube-api-access-9hxq2\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527531 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63674760-f499-49b8-a575-a8ae954eada4-metrics-tls\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527552 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/13f4a10d-50aa-41ec-9931-cb835ba1f54c-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527569 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527590 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527610 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-images\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527632 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ff4j\" (UniqueName: \"kubernetes.io/projected/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-kube-api-access-8ff4j\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 
crc kubenswrapper[4810]: I0219 15:11:51.527656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd317655-38ea-4fdb-95d0-82adc08456a8-serving-cert\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527676 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527784 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77544608-a940-4c0c-9a1a-a5a98f480134-config\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527944 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e8c9b478-4884-4c32-acf1-5fdec0cfac06-metrics-tls\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528130 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: 
\"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528188 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528269 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528358 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528891 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239280e2-b335-4f87-89a8-00cb6f8e3c69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.528933 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.527697 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-client\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.529570 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4fb\" (UniqueName: \"kubernetes.io/projected/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-kube-api-access-7x4fb\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.529254 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.529528 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-service-ca\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.529262 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-etcd-client\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.529822 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e4ea5aa-2074-4100-a916-6bdfb3331d43-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.530277 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.530621 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e4ea5aa-2074-4100-a916-6bdfb3331d43-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.530999 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r74mv"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531664 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0e4ea5aa-2074-4100-a916-6bdfb3331d43-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531769 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f4b9328-0efe-42a8-9a73-a80eb6a26151-serving-cert\") 
pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531797 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/026a2812-dc11-4b60-b911-bb41a0d39d7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531823 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531841 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696fe96-0485-44d0-b4fb-161503c334e8-audit-dir\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531862 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-serving-cert\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531894 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2mbjx\" (UniqueName: \"kubernetes.io/projected/7a29951a-027e-49b4-a7ea-a8e363942414-kube-api-access-2mbjx\") pod \"downloads-7954f5f757-dkppn\" (UID: \"7a29951a-027e-49b4-a7ea-a8e363942414\") " pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531928 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531949 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-webhook-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.531983 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63674760-f499-49b8-a575-a8ae954eada4-trusted-ca\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: 
\"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532032 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-audit-policies\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532052 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532073 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqb7\" (UniqueName: \"kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532115 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh895\" (UniqueName: \"kubernetes.io/projected/25e05c8a-335b-405c-9033-f689c21c5ecc-kube-api-access-jh895\") pod 
\"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532135 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a0b611-cb8e-431b-b527-b6164471c85f-config\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532176 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd317655-38ea-4fdb-95d0-82adc08456a8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532194 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5cc06c55-7085-4cc0-8399-833b4243b51e-proxy-tls\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532216 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-encryption-config\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: 
I0219 15:11:51.532237 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2sf\" (UniqueName: \"kubernetes.io/projected/77544608-a940-4c0c-9a1a-a5a98f480134-kube-api-access-jh2sf\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7a0b611-cb8e-431b-b527-b6164471c85f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532279 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhl5\" (UniqueName: \"kubernetes.io/projected/76ffdcba-57d6-4636-8373-f088926a716d-kube-api-access-7lhl5\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532297 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532315 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532350 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532367 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532386 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgcbs\" (UniqueName: \"kubernetes.io/projected/8f4b9328-0efe-42a8-9a73-a80eb6a26151-kube-api-access-hgcbs\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532408 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7k5\" (UniqueName: \"kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc 
kubenswrapper[4810]: I0219 15:11:51.532425 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-images\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-config\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532459 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-default-certificate\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532477 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/25e05c8a-335b-405c-9033-f689c21c5ecc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532494 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: 
\"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532517 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxw8d\" (UniqueName: \"kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532538 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9hrx\" (UniqueName: \"kubernetes.io/projected/ef2828b3-f501-4105-abc8-6b1ce9658301-kube-api-access-x9hrx\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532556 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xt4\" (UniqueName: \"kubernetes.io/projected/5cc06c55-7085-4cc0-8399-833b4243b51e-kube-api-access-x9xt4\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-audit-policies\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.532949 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c696fe96-0485-44d0-b4fb-161503c334e8-audit-dir\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.533083 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nds4"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.533580 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-config\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.534456 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.534541 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-etcd-client\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.534650 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " 
pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.535450 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.535725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-serving-cert\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.536598 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-images\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.537650 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee373cf-50b8-42f4-b30d-4a3d230ca27e-serving-cert\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.537792 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.537879 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-hlw9s"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.538161 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/77544608-a940-4c0c-9a1a-a5a98f480134-machine-approver-tls\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.538401 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.538682 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.539114 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/25e05c8a-335b-405c-9033-f689c21c5ecc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.539461 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.539861 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 
15:11:51.540237 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c696fe96-0485-44d0-b4fb-161503c334e8-encryption-config\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.541817 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.543586 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.544134 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.544844 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmnw"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.547228 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.548714 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.549916 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nds4"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.551422 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-h68pj"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.553033 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.554519 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cdfxh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.555270 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.556245 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cdfxh"] Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.576146 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.596753 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.616570 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2f48\" (UniqueName: \"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-kube-api-access-d2f48\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633540 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633564 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633609 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-profile-collector-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633632 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnnp\" (UniqueName: \"kubernetes.io/projected/bd317655-38ea-4fdb-95d0-82adc08456a8-kube-api-access-dbnnp\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633660 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a2812-dc11-4b60-b911-bb41a0d39d7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633681 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7a0b611-cb8e-431b-b527-b6164471c85f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633713 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633732 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633751 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633770 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63674760-f499-49b8-a575-a8ae954eada4-metrics-tls\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/13f4a10d-50aa-41ec-9931-cb835ba1f54c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633820 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633841 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ff4j\" (UniqueName: \"kubernetes.io/projected/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-kube-api-access-8ff4j\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633868 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-images\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633888 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd317655-38ea-4fdb-95d0-82adc08456a8-serving-cert\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633936 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/026a2812-dc11-4b60-b911-bb41a0d39d7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633961 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.633994 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-webhook-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634022 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63674760-f499-49b8-a575-a8ae954eada4-trusted-ca\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634040 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634088 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5cc06c55-7085-4cc0-8399-833b4243b51e-proxy-tls\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634112 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a0b611-cb8e-431b-b527-b6164471c85f-config\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 
crc kubenswrapper[4810]: I0219 15:11:51.634136 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd317655-38ea-4fdb-95d0-82adc08456a8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634170 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7a0b611-cb8e-431b-b527-b6164471c85f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634194 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhl5\" (UniqueName: \"kubernetes.io/projected/76ffdcba-57d6-4636-8373-f088926a716d-kube-api-access-7lhl5\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634224 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-default-certificate\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634313 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634358 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxw8d\" (UniqueName: \"kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634388 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9hrx\" (UniqueName: \"kubernetes.io/projected/ef2828b3-f501-4105-abc8-6b1ce9658301-kube-api-access-x9hrx\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634406 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xt4\" (UniqueName: \"kubernetes.io/projected/5cc06c55-7085-4cc0-8399-833b4243b51e-kube-api-access-x9xt4\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634445 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-stats-auth\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634475 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvtp\" (UniqueName: \"kubernetes.io/projected/6a43ce91-6673-4641-a2d6-551afe72688d-kube-api-access-xmvtp\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634496 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ffdcba-57d6-4636-8373-f088926a716d-service-ca-bundle\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634524 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a2812-dc11-4b60-b911-bb41a0d39d7d-config\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634573 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634592 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-srv-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634610 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634636 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634654 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a43ce91-6673-4641-a2d6-551afe72688d-tmpfs\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-apiservice-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634697 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634730 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 
15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634751 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-metrics-certs\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634777 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqq2f\" (UniqueName: \"kubernetes.io/projected/13f4a10d-50aa-41ec-9931-cb835ba1f54c-kube-api-access-zqq2f\") pod \"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634794 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.635620 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.635796 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd317655-38ea-4fdb-95d0-82adc08456a8-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.636587 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.636580 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ffdcba-57d6-4636-8373-f088926a716d-service-ca-bundle\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.634727 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.636930 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a0b611-cb8e-431b-b527-b6164471c85f-config\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.637089 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6a43ce91-6673-4641-a2d6-551afe72688d-tmpfs\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:51 
crc kubenswrapper[4810]: I0219 15:11:51.637402 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.638419 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a2812-dc11-4b60-b911-bb41a0d39d7d-config\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.638855 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/63674760-f499-49b8-a575-a8ae954eada4-metrics-tls\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.639121 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a2812-dc11-4b60-b911-bb41a0d39d7d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.639482 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: 
\"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.640061 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-default-certificate\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.642065 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.642231 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-metrics-certs\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.642378 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7a0b611-cb8e-431b-b527-b6164471c85f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.644038 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/13f4a10d-50aa-41ec-9931-cb835ba1f54c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.644815 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76ffdcba-57d6-4636-8373-f088926a716d-stats-auth\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.666521 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.668038 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63674760-f499-49b8-a575-a8ae954eada4-trusted-ca\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.676433 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.696992 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.717292 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.757230 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 15:11:51 crc 
kubenswrapper[4810]: I0219 15:11:51.757505 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.768414 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.770108 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.776783 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.787174 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.797186 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.805679 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.816281 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.818672 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.843500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.855801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.868248 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.876015 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.881656 
4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.895410 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.904057 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f46b00f-f770-4539-92f9-60e1146308ab-proxy-tls\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.923303 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.931844 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.937065 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.949359 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.956058 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.970904 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-profile-collector-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.976538 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 15:11:51 crc kubenswrapper[4810]: I0219 15:11:51.996122 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.001469 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef2828b3-f501-4105-abc8-6b1ce9658301-srv-cert\") pod \"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.015845 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.031204 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.036842 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.058173 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.075457 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.076766 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.091855 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.096210 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.116848 4810 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.131290 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-apiservice-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.132941 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a43ce91-6673-4641-a2d6-551afe72688d-webhook-cert\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.137029 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.156628 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.160674 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd317655-38ea-4fdb-95d0-82adc08456a8-serving-cert\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.176390 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.196935 4810 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.207864 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5cc06c55-7085-4cc0-8399-833b4243b51e-images\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.217580 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.237637 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.252275 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5cc06c55-7085-4cc0-8399-833b4243b51e-proxy-tls\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.256233 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.276168 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.296285 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.316856 
4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.336843 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.356252 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.376655 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.396309 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.413883 4810 request.go:700] Waited for 1.007369853s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.416648 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.437432 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.457149 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.496144 4810 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.498932 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zrj\" (UniqueName: \"kubernetes.io/projected/a4b3e85d-d02e-4e13-8bab-aa86d2629d85-kube-api-access-j2zrj\") pod \"apiserver-76f77b778f-5d4rp\" (UID: \"a4b3e85d-d02e-4e13-8bab-aa86d2629d85\") " pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.517588 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.536206 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.556181 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.576139 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.597566 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.616645 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.635969 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.656881 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.683292 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.695244 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.697876 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.717306 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.736819 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.756540 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.776271 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.796921 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.817096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.836025 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.857705 4810 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.876318 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.895303 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.915817 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.931385 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5d4rp"] Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.936904 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 15:11:52 crc kubenswrapper[4810]: W0219 15:11:52.944178 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3e85d_d02e_4e13_8bab_aa86d2629d85.slice/crio-53df8586a8388b6fc9e9be585e86c564e80a0916443047f532e578facc1131c5 WatchSource:0}: Error finding container 53df8586a8388b6fc9e9be585e86c564e80a0916443047f532e578facc1131c5: Status 404 returned error can't find the container with id 53df8586a8388b6fc9e9be585e86c564e80a0916443047f532e578facc1131c5 Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.965824 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.975485 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 15:11:52 crc kubenswrapper[4810]: I0219 15:11:52.995831 4810 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.016135 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.035628 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.055431 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.121611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ff8\" (UniqueName: \"kubernetes.io/projected/e8c9b478-4884-4c32-acf1-5fdec0cfac06-kube-api-access-v2ff8\") pod \"dns-operator-744455d44c-wpzzq\" (UID: \"e8c9b478-4884-4c32-acf1-5fdec0cfac06\") " pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.143773 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c9k2\" (UniqueName: \"kubernetes.io/projected/0f46b00f-f770-4539-92f9-60e1146308ab-kube-api-access-7c9k2\") pod \"machine-config-controller-84d6567774-f4qrf\" (UID: \"0f46b00f-f770-4539-92f9-60e1146308ab\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.157694 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87t8g\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-kube-api-access-87t8g\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.169887 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kr7hk\" (UniqueName: \"kubernetes.io/projected/f7aa90a6-963b-4ff0-b6de-ad46fa896e18-kube-api-access-kr7hk\") pod \"openshift-apiserver-operator-796bbdcf4f-55x9c\" (UID: \"f7aa90a6-963b-4ff0-b6de-ad46fa896e18\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.188616 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzjq\" (UniqueName: \"kubernetes.io/projected/239280e2-b335-4f87-89a8-00cb6f8e3c69-kube-api-access-8gzjq\") pod \"authentication-operator-69f744f599-jm2nk\" (UID: \"239280e2-b335-4f87-89a8-00cb6f8e3c69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.202193 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.216079 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sn5\" (UniqueName: \"kubernetes.io/projected/fee373cf-50b8-42f4-b30d-4a3d230ca27e-kube-api-access-v4sn5\") pod \"etcd-operator-b45778765-d8pqg\" (UID: \"fee373cf-50b8-42f4-b30d-4a3d230ca27e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.227824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0e4ea5aa-2074-4100-a916-6bdfb3331d43-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xhg7l\" (UID: \"0e4ea5aa-2074-4100-a916-6bdfb3331d43\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.251399 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l478x\" 
(UniqueName: \"kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x\") pod \"route-controller-manager-6576b87f9c-n8zth\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.265885 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.270034 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.270612 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hxq2\" (UniqueName: \"kubernetes.io/projected/c696fe96-0485-44d0-b4fb-161503c334e8-kube-api-access-9hxq2\") pod \"apiserver-7bbb656c7d-q6zwc\" (UID: \"c696fe96-0485-44d0-b4fb-161503c334e8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.292217 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.303285 4810 generic.go:334] "Generic (PLEG): container finished" podID="a4b3e85d-d02e-4e13-8bab-aa86d2629d85" containerID="e1298251b6d3f264c15b9e63f0ef24620188b4f18194f013ad69b9aa3b66a9fa" exitCode=0 Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.303365 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" event={"ID":"a4b3e85d-d02e-4e13-8bab-aa86d2629d85","Type":"ContainerDied","Data":"e1298251b6d3f264c15b9e63f0ef24620188b4f18194f013ad69b9aa3b66a9fa"} Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.303401 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" event={"ID":"a4b3e85d-d02e-4e13-8bab-aa86d2629d85","Type":"ContainerStarted","Data":"53df8586a8388b6fc9e9be585e86c564e80a0916443047f532e578facc1131c5"} Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.305741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4fb\" (UniqueName: \"kubernetes.io/projected/9a7776ca-1a56-4eca-9e44-ba1b7b15510f-kube-api-access-7x4fb\") pod \"machine-api-operator-5694c8668f-l66cb\" (UID: \"9a7776ca-1a56-4eca-9e44-ba1b7b15510f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.311578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqb7\" (UniqueName: \"kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7\") pod \"console-f9d7485db-4hddt\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.319723 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.339950 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgcbs\" (UniqueName: \"kubernetes.io/projected/8f4b9328-0efe-42a8-9a73-a80eb6a26151-kube-api-access-hgcbs\") pod \"openshift-controller-manager-operator-756b6f6bc6-kclsk\" (UID: \"8f4b9328-0efe-42a8-9a73-a80eb6a26151\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.357028 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjx\" (UniqueName: \"kubernetes.io/projected/7a29951a-027e-49b4-a7ea-a8e363942414-kube-api-access-2mbjx\") pod \"downloads-7954f5f757-dkppn\" (UID: \"7a29951a-027e-49b4-a7ea-a8e363942414\") " pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.370867 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.371468 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2sf\" (UniqueName: \"kubernetes.io/projected/77544608-a940-4c0c-9a1a-a5a98f480134-kube-api-access-jh2sf\") pod \"machine-approver-56656f9798-8mxdc\" (UID: \"77544608-a940-4c0c-9a1a-a5a98f480134\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.380595 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.380688 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wpzzq"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.389783 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.392981 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7k5\" (UniqueName: \"kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5\") pod \"controller-manager-879f6c89f-sl5p9\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.411554 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh895\" (UniqueName: \"kubernetes.io/projected/25e05c8a-335b-405c-9033-f689c21c5ecc-kube-api-access-jh895\") pod \"cluster-samples-operator-665b6dd947-xhzlb\" (UID: \"25e05c8a-335b-405c-9033-f689c21c5ecc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.414319 4810 request.go:700] Waited for 1.87568074s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.415436 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.437768 4810 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.439814 4810 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.448622 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.456238 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.476729 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.499998 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.516043 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.526107 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.537745 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.542201 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.544220 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.545088 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.557276 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.575561 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.576893 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d8pqg"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.578099 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:11:53 crc kubenswrapper[4810]: W0219 15:11:53.579302 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc696fe96_0485_44d0_b4fb_161503c334e8.slice/crio-4cfc3c3a4162660b44696564144790aac4145b83976abe219d6df4b4502d2dc6 WatchSource:0}: Error finding container 4cfc3c3a4162660b44696564144790aac4145b83976abe219d6df4b4502d2dc6: Status 404 returned error can't find the container with id 4cfc3c3a4162660b44696564144790aac4145b83976abe219d6df4b4502d2dc6 Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.585012 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.595831 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.604921 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l66cb"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.645142 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.659054 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnnp\" (UniqueName: \"kubernetes.io/projected/bd317655-38ea-4fdb-95d0-82adc08456a8-kube-api-access-dbnnp\") pod \"openshift-config-operator-7777fb866f-25cmt\" (UID: \"bd317655-38ea-4fdb-95d0-82adc08456a8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.675771 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/026a2812-dc11-4b60-b911-bb41a0d39d7d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9lqqd\" (UID: \"026a2812-dc11-4b60-b911-bb41a0d39d7d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.686965 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk"] Feb 19 15:11:53 crc kubenswrapper[4810]: W0219 15:11:53.690713 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfee373cf_50b8_42f4_b30d_4a3d230ca27e.slice/crio-3989d96419fba09da07c9e99ef43b0eb4df43c01359b3c17be42be6e9e26f30f WatchSource:0}: Error finding container 3989d96419fba09da07c9e99ef43b0eb4df43c01359b3c17be42be6e9e26f30f: Status 404 returned error can't find the container with id 3989d96419fba09da07c9e99ef43b0eb4df43c01359b3c17be42be6e9e26f30f Feb 19 15:11:53 crc kubenswrapper[4810]: W0219 15:11:53.696037 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7776ca_1a56_4eca_9e44_ba1b7b15510f.slice/crio-b87f6328992f13d54a1ed3e925e1d3835339d68ebe81ed0173553e621b7c78da WatchSource:0}: Error finding container b87f6328992f13d54a1ed3e925e1d3835339d68ebe81ed0173553e621b7c78da: Status 404 returned error can't find the container with id b87f6328992f13d54a1ed3e925e1d3835339d68ebe81ed0173553e621b7c78da Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.700893 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2f48\" (UniqueName: \"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-kube-api-access-d2f48\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.704083 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.707404 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.720063 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ff4j\" (UniqueName: \"kubernetes.io/projected/b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3-kube-api-access-8ff4j\") pod \"package-server-manager-789f6589d5-2nn8b\" (UID: \"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:53 crc kubenswrapper[4810]: W0219 15:11:53.724495 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4b9328_0efe_42a8_9a73_a80eb6a26151.slice/crio-724adfe31aa7fad7a5da317cec629a9441c15ea118c52e1a602faab0955fb919 WatchSource:0}: Error finding container 724adfe31aa7fad7a5da317cec629a9441c15ea118c52e1a602faab0955fb919: Status 404 returned error can't find the container with id 724adfe31aa7fad7a5da317cec629a9441c15ea118c52e1a602faab0955fb919 Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.733967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxw8d\" (UniqueName: \"kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d\") pod \"oauth-openshift-558db77b4-r74mv\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.763510 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9hrx\" (UniqueName: \"kubernetes.io/projected/ef2828b3-f501-4105-abc8-6b1ce9658301-kube-api-access-x9hrx\") pod 
\"catalog-operator-68c6474976-2phqk\" (UID: \"ef2828b3-f501-4105-abc8-6b1ce9658301\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.765409 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jm2nk"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.780583 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xt4\" (UniqueName: \"kubernetes.io/projected/5cc06c55-7085-4cc0-8399-833b4243b51e-kube-api-access-x9xt4\") pod \"machine-config-operator-74547568cd-65ltw\" (UID: \"5cc06c55-7085-4cc0-8399-833b4243b51e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.792241 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63674760-f499-49b8-a575-a8ae954eada4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kmhmh\" (UID: \"63674760-f499-49b8-a575-a8ae954eada4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:53 crc kubenswrapper[4810]: W0219 15:11:53.799196 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod239280e2_b335_4f87_89a8_00cb6f8e3c69.slice/crio-f40aa4e7f50582e05ed4f854cfdd01edd619d078425d9ec703efac5568e62189 WatchSource:0}: Error finding container f40aa4e7f50582e05ed4f854cfdd01edd619d078425d9ec703efac5568e62189: Status 404 returned error can't find the container with id f40aa4e7f50582e05ed4f854cfdd01edd619d078425d9ec703efac5568e62189 Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.828413 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvtp\" (UniqueName: 
\"kubernetes.io/projected/6a43ce91-6673-4641-a2d6-551afe72688d-kube-api-access-xmvtp\") pod \"packageserver-d55dfcdfc-hmxtq\" (UID: \"6a43ce91-6673-4641-a2d6-551afe72688d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.842241 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7a0b611-cb8e-431b-b527-b6164471c85f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l924n\" (UID: \"b7a0b611-cb8e-431b-b527-b6164471c85f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.844851 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.865510 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhl5\" (UniqueName: \"kubernetes.io/projected/76ffdcba-57d6-4636-8373-f088926a716d-kube-api-access-7lhl5\") pod \"router-default-5444994796-hvw7f\" (UID: \"76ffdcba-57d6-4636-8373-f088926a716d\") " pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.891970 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e6fd37-d069-441e-9c0f-caf60d8f8d4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8j2gh\" (UID: \"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.905978 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqq2f\" (UniqueName: 
\"kubernetes.io/projected/13f4a10d-50aa-41ec-9931-cb835ba1f54c-kube-api-access-zqq2f\") pod \"multus-admission-controller-857f4d67dd-zrsn2\" (UID: \"13f4a10d-50aa-41ec-9931-cb835ba1f54c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.914901 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.915738 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.947799 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.949617 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.953537 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.956906 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.965261 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971117 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96x2\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971168 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/969b9cb6-f89f-47e9-b8f7-754804a41dea-serving-cert\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971196 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971667 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971730 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971860 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.971902 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972623 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-trusted-ca\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972722 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-key\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972759 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972805 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ff005-2d60-4eb5-b8fa-59b84661617f-config-volume\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972879 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7e5d88-e6b7-416c-abbd-eed95cc772de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972907 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.972929 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973001 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8f83b3-fabb-4404-88ce-e64c4db8a568-config\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973061 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtvx\" (UniqueName: \"kubernetes.io/projected/d3068e5d-4ff8-438f-958b-f0f90e773ca1-kube-api-access-sxtvx\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973165 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973227 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9b8f83b3-fabb-4404-88ce-e64c4db8a568-serving-cert\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973260 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-config\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973291 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7e5d88-e6b7-416c-abbd-eed95cc772de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973361 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfppl\" (UniqueName: \"kubernetes.io/projected/9d9ff005-2d60-4eb5-b8fa-59b84661617f-kube-api-access-dfppl\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973396 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45klr\" (UniqueName: \"kubernetes.io/projected/e3f898c2-3784-4157-b1c5-fdadcaf69bef-kube-api-access-45klr\") pod \"migrator-59844c95c7-w6k69\" (UID: \"e3f898c2-3784-4157-b1c5-fdadcaf69bef\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973681 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/64dc0d58-11d4-456b-97ab-a4d3ec28225b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973705 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdm7r\" (UniqueName: \"kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973747 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdzd\" (UniqueName: \"kubernetes.io/projected/969b9cb6-f89f-47e9-b8f7-754804a41dea-kube-api-access-4hdzd\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973865 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrc6\" (UniqueName: \"kubernetes.io/projected/74295390-5384-402d-8c5b-dc2559bb6d9c-kube-api-access-8rrc6\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973936 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973958 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8mnc\" (UniqueName: \"kubernetes.io/projected/0c7e5d88-e6b7-416c-abbd-eed95cc772de-kube-api-access-z8mnc\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.973978 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6lv\" (UniqueName: \"kubernetes.io/projected/64dc0d58-11d4-456b-97ab-a4d3ec28225b-kube-api-access-6x6lv\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974028 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9vz7\" (UniqueName: \"kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974049 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d9ff005-2d60-4eb5-b8fa-59b84661617f-metrics-tls\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974199 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-cabundle\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974237 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974263 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974353 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:53 crc 
kubenswrapper[4810]: I0219 15:11:53.974388 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncbgj\" (UniqueName: \"kubernetes.io/projected/9b8f83b3-fabb-4404-88ce-e64c4db8a568-kube-api-access-ncbgj\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.974503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-srv-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:53 crc kubenswrapper[4810]: E0219 15:11:53.977905 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.477888213 +0000 UTC m=+143.959918337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:53 crc kubenswrapper[4810]: I0219 15:11:53.978968 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.006185 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.010222 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.024191 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.075804 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.076206 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.576169417 +0000 UTC m=+144.058199541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.076932 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-srv-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.076971 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96x2\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.076999 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/969b9cb6-f89f-47e9-b8f7-754804a41dea-serving-cert\") pod \"console-operator-58897d9998-h68pj\" (UID: 
\"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077057 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-node-bootstrap-token\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077082 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbdn\" (UniqueName: \"kubernetes.io/projected/59eb54a2-ac34-429b-a275-7d365be9ad2f-kube-api-access-2cbdn\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077128 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077145 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-mountpoint-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077162 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077179 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-trusted-ca\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077198 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-key\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077216 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 
15:11:54.077235 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ff005-2d60-4eb5-b8fa-59b84661617f-config-volume\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077278 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7e5d88-e6b7-416c-abbd-eed95cc772de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077296 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077335 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8f83b3-fabb-4404-88ce-e64c4db8a568-config\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077355 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxtvx\" (UniqueName: \"kubernetes.io/projected/d3068e5d-4ff8-438f-958b-f0f90e773ca1-kube-api-access-sxtvx\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077378 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9cg2\" (UniqueName: \"kubernetes.io/projected/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-kube-api-access-m9cg2\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077397 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jld4k\" (UniqueName: \"kubernetes.io/projected/51a1f271-446d-42d2-b946-ad816257e990-kube-api-access-jld4k\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077418 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077438 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9b8f83b3-fabb-4404-88ce-e64c4db8a568-serving-cert\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077467 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-config\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7e5d88-e6b7-416c-abbd-eed95cc772de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077512 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45klr\" (UniqueName: \"kubernetes.io/projected/e3f898c2-3784-4157-b1c5-fdadcaf69bef-kube-api-access-45klr\") pod \"migrator-59844c95c7-w6k69\" (UID: \"e3f898c2-3784-4157-b1c5-fdadcaf69bef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077533 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfppl\" (UniqueName: \"kubernetes.io/projected/9d9ff005-2d60-4eb5-b8fa-59b84661617f-kube-api-access-dfppl\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077555 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/64dc0d58-11d4-456b-97ab-a4d3ec28225b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077576 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-socket-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077597 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-plugins-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077617 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdm7r\" (UniqueName: \"kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077635 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-registration-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: 
\"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077658 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdzd\" (UniqueName: \"kubernetes.io/projected/969b9cb6-f89f-47e9-b8f7-754804a41dea-kube-api-access-4hdzd\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrc6\" (UniqueName: \"kubernetes.io/projected/74295390-5384-402d-8c5b-dc2559bb6d9c-kube-api-access-8rrc6\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077695 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-cert\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.077728 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8mnc\" (UniqueName: \"kubernetes.io/projected/0c7e5d88-e6b7-416c-abbd-eed95cc772de-kube-api-access-z8mnc\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078359 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6lv\" (UniqueName: 
\"kubernetes.io/projected/64dc0d58-11d4-456b-97ab-a4d3ec28225b-kube-api-access-6x6lv\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078381 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078404 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9vz7\" (UniqueName: \"kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078421 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d9ff005-2d60-4eb5-b8fa-59b84661617f-metrics-tls\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078448 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-certs\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078476 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-cabundle\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078505 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078523 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078542 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078561 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncbgj\" (UniqueName: \"kubernetes.io/projected/9b8f83b3-fabb-4404-88ce-e64c4db8a568-kube-api-access-ncbgj\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078581 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-csi-data-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.078394 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d9ff005-2d60-4eb5-b8fa-59b84661617f-config-volume\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.079786 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.579774076 +0000 UTC m=+144.061804200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.081474 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.081626 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.082592 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.083347 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-cabundle\") pod \"service-ca-9c57cc56f-frmnw\" (UID: 
\"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.084922 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.084988 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b8f83b3-fabb-4404-88ce-e64c4db8a568-serving-cert\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.085635 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b8f83b3-fabb-4404-88ce-e64c4db8a568-config\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.085873 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.086132 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets\") 
pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.087125 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-trusted-ca\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.087381 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.087830 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/969b9cb6-f89f-47e9-b8f7-754804a41dea-config\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.087937 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7e5d88-e6b7-416c-abbd-eed95cc772de-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.088516 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9d9ff005-2d60-4eb5-b8fa-59b84661617f-metrics-tls\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.089199 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/969b9cb6-f89f-47e9-b8f7-754804a41dea-serving-cert\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.091465 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/64dc0d58-11d4-456b-97ab-a4d3ec28225b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.092053 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7e5d88-e6b7-416c-abbd-eed95cc772de-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.092083 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-srv-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.094112 
4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d3068e5d-4ff8-438f-958b-f0f90e773ca1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.095020 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.103463 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.103712 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74295390-5384-402d-8c5b-dc2559bb6d9c-signing-key\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.116215 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdm7r\" (UniqueName: \"kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r\") pod \"collect-profiles-29525220-wqnml\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" 
Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.164403 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9vz7\" (UniqueName: \"kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7\") pod \"marketplace-operator-79b997595-r2tqm\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.180649 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.180911 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.680876427 +0000 UTC m=+144.162906541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181078 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-socket-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181103 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-plugins-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-registration-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-cert\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc 
kubenswrapper[4810]: I0219 15:11:54.181210 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-certs\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181644 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-csi-data-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181719 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-node-bootstrap-token\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181768 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbdn\" (UniqueName: \"kubernetes.io/projected/59eb54a2-ac34-429b-a275-7d365be9ad2f-kube-api-access-2cbdn\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " 
pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-mountpoint-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181814 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-registration-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181899 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-mountpoint-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.181982 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9cg2\" (UniqueName: \"kubernetes.io/projected/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-kube-api-access-m9cg2\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.182043 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jld4k\" (UniqueName: \"kubernetes.io/projected/51a1f271-446d-42d2-b946-ad816257e990-kube-api-access-jld4k\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.182103 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.68208335 +0000 UTC m=+144.164113634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.182708 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-socket-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.182748 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-plugins-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.183503 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/51a1f271-446d-42d2-b946-ad816257e990-csi-data-dir\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.187631 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96x2\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.201954 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-cert\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.211163 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-certs\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.211565 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/59eb54a2-ac34-429b-a275-7d365be9ad2f-node-bootstrap-token\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.225040 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdzd\" (UniqueName: \"kubernetes.io/projected/969b9cb6-f89f-47e9-b8f7-754804a41dea-kube-api-access-4hdzd\") pod \"console-operator-58897d9998-h68pj\" (UID: \"969b9cb6-f89f-47e9-b8f7-754804a41dea\") " 
pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.227183 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrc6\" (UniqueName: \"kubernetes.io/projected/74295390-5384-402d-8c5b-dc2559bb6d9c-kube-api-access-8rrc6\") pod \"service-ca-9c57cc56f-frmnw\" (UID: \"74295390-5384-402d-8c5b-dc2559bb6d9c\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.230124 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8mnc\" (UniqueName: \"kubernetes.io/projected/0c7e5d88-e6b7-416c-abbd-eed95cc772de-kube-api-access-z8mnc\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwvxk\" (UID: \"0c7e5d88-e6b7-416c-abbd-eed95cc772de\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.254825 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6lv\" (UniqueName: \"kubernetes.io/projected/64dc0d58-11d4-456b-97ab-a4d3ec28225b-kube-api-access-6x6lv\") pod \"control-plane-machine-set-operator-78cbb6b69f-895xv\" (UID: \"64dc0d58-11d4-456b-97ab-a4d3ec28225b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.284306 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxtvx\" (UniqueName: \"kubernetes.io/projected/d3068e5d-4ff8-438f-958b-f0f90e773ca1-kube-api-access-sxtvx\") pod \"olm-operator-6b444d44fb-mkt5x\" (UID: \"d3068e5d-4ff8-438f-958b-f0f90e773ca1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.286600 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.78639606 +0000 UTC m=+144.268426184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.286631 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.288494 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.788483897 +0000 UTC m=+144.270514021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.287887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.302237 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncbgj\" (UniqueName: \"kubernetes.io/projected/9b8f83b3-fabb-4404-88ce-e64c4db8a568-kube-api-access-ncbgj\") pod \"service-ca-operator-777779d784-n6fl9\" (UID: \"9b8f83b3-fabb-4404-88ce-e64c4db8a568\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.318026 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.330187 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.347619 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.349513 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.358802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.365006 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfppl\" (UniqueName: \"kubernetes.io/projected/9d9ff005-2d60-4eb5-b8fa-59b84661617f-kube-api-access-dfppl\") pod \"dns-default-zgcrd\" (UID: \"9d9ff005-2d60-4eb5-b8fa-59b84661617f\") " pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.368716 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" event={"ID":"77544608-a940-4c0c-9a1a-a5a98f480134","Type":"ContainerStarted","Data":"007011a340e0d09a98b8c44a23894fcef1b5f303cc95cbdb4af47959fb716490"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.368771 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" event={"ID":"77544608-a940-4c0c-9a1a-a5a98f480134","Type":"ContainerStarted","Data":"3f00fd9a2efd649d59c4ed484657232da995bc534999cc19374c5d465c923bb7"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.369720 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.369983 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45klr\" (UniqueName: \"kubernetes.io/projected/e3f898c2-3784-4157-b1c5-fdadcaf69bef-kube-api-access-45klr\") pod \"migrator-59844c95c7-w6k69\" (UID: \"e3f898c2-3784-4157-b1c5-fdadcaf69bef\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.372711 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dkppn"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.374262 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.376164 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.384220 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" event={"ID":"c696fe96-0485-44d0-b4fb-161503c334e8","Type":"ContainerStarted","Data":"4cfc3c3a4162660b44696564144790aac4145b83976abe219d6df4b4502d2dc6"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.385686 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.390831 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.391800 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.392027 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.891999515 +0000 UTC m=+144.374029639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.402369 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.404872 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.405736 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.410539 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jld4k\" (UniqueName: \"kubernetes.io/projected/51a1f271-446d-42d2-b946-ad816257e990-kube-api-access-jld4k\") pod \"csi-hostpathplugin-4nds4\" (UID: \"51a1f271-446d-42d2-b946-ad816257e990\") " pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.412029 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" event={"ID":"a4b3e85d-d02e-4e13-8bab-aa86d2629d85","Type":"ContainerStarted","Data":"fb5c58eed043ec9926958facd1fa402602fa16dc839b95897784e8bb09f69736"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.424351 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" event={"ID":"e8c9b478-4884-4c32-acf1-5fdec0cfac06","Type":"ContainerStarted","Data":"521ba8d34498bf4aa0dff4e5633490e78b4ea0c74bad270709ecee20deb1c27e"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.424416 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" event={"ID":"e8c9b478-4884-4c32-acf1-5fdec0cfac06","Type":"ContainerStarted","Data":"eadc7dbbe4e29ff05dc2f2ea7386fe9ea9b5e63c2fe953be59a0fed2b43d32d8"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.426014 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" event={"ID":"fee373cf-50b8-42f4-b30d-4a3d230ca27e","Type":"ContainerStarted","Data":"3989d96419fba09da07c9e99ef43b0eb4df43c01359b3c17be42be6e9e26f30f"} Feb 19 15:11:54 crc 
kubenswrapper[4810]: I0219 15:11:54.427784 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4hddt" event={"ID":"362cd55c-b576-44bd-843c-078bf26b3b1e","Type":"ContainerStarted","Data":"a1d8a2975e22eb56e23640790355f60287c10a0504259d614d431ce0dc78edbb"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.428836 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.429241 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" event={"ID":"239280e2-b335-4f87-89a8-00cb6f8e3c69","Type":"ContainerStarted","Data":"f40aa4e7f50582e05ed4f854cfdd01edd619d078425d9ec703efac5568e62189"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.437635 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9cg2\" (UniqueName: \"kubernetes.io/projected/500c6ae7-fd1f-4af6-ad6c-f1db0c2af222-kube-api-access-m9cg2\") pod \"ingress-canary-cdfxh\" (UID: \"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222\") " pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.441431 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-25cmt"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.442947 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cdfxh" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.477660 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbdn\" (UniqueName: \"kubernetes.io/projected/59eb54a2-ac34-429b-a275-7d365be9ad2f-kube-api-access-2cbdn\") pod \"machine-config-server-hlw9s\" (UID: \"59eb54a2-ac34-429b-a275-7d365be9ad2f\") " pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.494492 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.494909 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:54.994893586 +0000 UTC m=+144.476923710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.503268 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" event={"ID":"f7aa90a6-963b-4ff0-b6de-ad46fa896e18","Type":"ContainerStarted","Data":"30f63e40e749a75dfb650e94c528a04f4c79cec4ef168e0a445ddba70fb1b578"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.509013 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hvw7f" event={"ID":"76ffdcba-57d6-4636-8373-f088926a716d","Type":"ContainerStarted","Data":"88d5c2e54e548489dab83140a0f451859ce48da18a62480421da02e8dbaacdb0"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.515033 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" event={"ID":"9a7776ca-1a56-4eca-9e44-ba1b7b15510f","Type":"ContainerStarted","Data":"257557aef8e6cd3ebd3462b014b8d22da58a9b420e2ca31f7c43087d3fa398e2"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.515096 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" event={"ID":"9a7776ca-1a56-4eca-9e44-ba1b7b15510f","Type":"ContainerStarted","Data":"b87f6328992f13d54a1ed3e925e1d3835339d68ebe81ed0173553e621b7c78da"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.557700 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" event={"ID":"0f46b00f-f770-4539-92f9-60e1146308ab","Type":"ContainerStarted","Data":"9d1149ddc52048dfa1aa76cbd88f80346d5338d7f3d8064a038a742f836264e5"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.595253 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.598867 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.098835535 +0000 UTC m=+144.580865659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.667955 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.671046 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" event={"ID":"8f4b9328-0efe-42a8-9a73-a80eb6a26151","Type":"ContainerStarted","Data":"cdd204fb4f3750e895bfe6ec3412aecfe2ba2f2ffb7194f6c70cdb6a4ef6b993"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.671106 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" event={"ID":"8f4b9328-0efe-42a8-9a73-a80eb6a26151","Type":"ContainerStarted","Data":"724adfe31aa7fad7a5da317cec629a9441c15ea118c52e1a602faab0955fb919"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.678341 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.682977 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.700651 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.701073 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 15:11:55.201056617 +0000 UTC m=+144.683086741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: W0219 15:11:54.728577 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd317655_38ea_4fdb_95d0_82adc08456a8.slice/crio-6d4f444c4ef909a4018798854927d2c08e92bcbe7bcb6657cd36b18b543b34ef WatchSource:0}: Error finding container 6d4f444c4ef909a4018798854927d2c08e92bcbe7bcb6657cd36b18b543b34ef: Status 404 returned error can't find the container with id 6d4f444c4ef909a4018798854927d2c08e92bcbe7bcb6657cd36b18b543b34ef Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.730700 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" event={"ID":"0e4ea5aa-2074-4100-a916-6bdfb3331d43","Type":"ContainerStarted","Data":"56f2f0ec8373e4a7100fbe6a0f0a7408ac55c7274506628fbc45edb2a88d3e00"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.731017 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" event={"ID":"0e4ea5aa-2074-4100-a916-6bdfb3331d43","Type":"ContainerStarted","Data":"54ad6164609fa4bf8b377833d348d622ed2bede550b38e85012c018a8367e0d4"} Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.734020 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hlw9s" Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.768012 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.807232 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.818118 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zrsn2"] Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.821857 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.321833379 +0000 UTC m=+144.803863503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.857356 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.872805 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r74mv"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.918357 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.918519 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:54 crc kubenswrapper[4810]: E0219 15:11:54.919003 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.418985182 +0000 UTC m=+144.901015306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.934983 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk"] Feb 19 15:11:54 crc kubenswrapper[4810]: I0219 15:11:54.966029 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.029023 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.029524 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.529212704 +0000 UTC m=+145.011242828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.029680 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.030137 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.530119399 +0000 UTC m=+145.012149513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.122262 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.131236 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.131833 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.631812976 +0000 UTC m=+145.113843100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.233436 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.235182 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.734934293 +0000 UTC m=+145.216964417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: W0219 15:11:55.265972 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59eb54a2_ac34_429b_a275_7d365be9ad2f.slice/crio-57a99ec104b993daec9138ab90fe7aa1004eeeb9fb7b47a9a690d2f75cb7076a WatchSource:0}: Error finding container 57a99ec104b993daec9138ab90fe7aa1004eeeb9fb7b47a9a690d2f75cb7076a: Status 404 returned error can't find the container with id 57a99ec104b993daec9138ab90fe7aa1004eeeb9fb7b47a9a690d2f75cb7076a Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.335970 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.339070 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.839046417 +0000 UTC m=+145.321076541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.348636 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.452721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.453720 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:55.953703322 +0000 UTC m=+145.435733446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.528307 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.554443 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.554733 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.0546841 +0000 UTC m=+145.536714224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.555239 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.555751 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.055743549 +0000 UTC m=+145.537773673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.656648 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.669588 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.670073 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.170053233 +0000 UTC m=+145.652083357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.702533 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" podStartSLOduration=123.702507376 podStartE2EDuration="2m3.702507376s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:55.670530987 +0000 UTC m=+145.152561111" watchObservedRunningTime="2026-02-19 15:11:55.702507376 +0000 UTC m=+145.184537500" Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.773840 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.774546 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.274531728 +0000 UTC m=+145.756561852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.781187 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" event={"ID":"5cc06c55-7085-4cc0-8399-833b4243b51e","Type":"ContainerStarted","Data":"c8b2a88dff64984dccbf60956cd019d8c8c69f8324695b17d3af2bbfc0d23a72"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.790862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" event={"ID":"fee373cf-50b8-42f4-b30d-4a3d230ca27e","Type":"ContainerStarted","Data":"5b07111955df88d3b4e36c415c8d6b917021b9b7758c003f572c5b199fd318d9"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.808975 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" event={"ID":"ad43df2c-4944-45e2-919f-0c297f4092d4","Type":"ContainerStarted","Data":"b43634a16e9cdfb66105b0eff470452a3406e103e0c90e02477b5f9de0072e03"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.809043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" event={"ID":"ad43df2c-4944-45e2-919f-0c297f4092d4","Type":"ContainerStarted","Data":"8973ecaad02b7aeec4f31a47d961686dc669236fd8c026776f4e494af608cf1b"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.810482 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.820180 4810 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n8zth container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.820229 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.853172 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.876875 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.894903 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.394853068 +0000 UTC m=+145.876883412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.899444 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" event={"ID":"77544608-a940-4c0c-9a1a-a5a98f480134","Type":"ContainerStarted","Data":"93dcb98ad4742d66dab7aacdcb5a3a8c191921c6d151afcc27724a90e94526cf"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.910411 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" event={"ID":"026a2812-dc11-4b60-b911-bb41a0d39d7d","Type":"ContainerStarted","Data":"1ff7e02020c5142042143d63aae5530e3213e86b9c32b9d0bf9607e45acf8227"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.914258 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"] Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.917103 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" event={"ID":"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6","Type":"ContainerStarted","Data":"918504ecdc8f526f00360e8dd63cb3df985da70843c55db6ce3d06ae23d89251"} Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.928048 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" podStartSLOduration=123.927995539 podStartE2EDuration="2m3.927995539s" podCreationTimestamp="2026-02-19 15:09:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:55.894307953 +0000 UTC m=+145.376338077" watchObservedRunningTime="2026-02-19 15:11:55.927995539 +0000 UTC m=+145.410025653" Feb 19 15:11:55 crc kubenswrapper[4810]: W0219 15:11:55.949582 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c7e5d88_e6b7_416c_abbd_eed95cc772de.slice/crio-ad1fce03b3ad1750486622c5e19772212fbe373da005f0287038be329e0b340f WatchSource:0}: Error finding container ad1fce03b3ad1750486622c5e19772212fbe373da005f0287038be329e0b340f: Status 404 returned error can't find the container with id ad1fce03b3ad1750486622c5e19772212fbe373da005f0287038be329e0b340f Feb 19 15:11:55 crc kubenswrapper[4810]: I0219 15:11:55.981367 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:55 crc kubenswrapper[4810]: E0219 15:11:55.984698 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.484675119 +0000 UTC m=+145.966705243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.021037 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" event={"ID":"a4b3e85d-d02e-4e13-8bab-aa86d2629d85","Type":"ContainerStarted","Data":"1ee9376227c338a67f41bb1e8709c6a10ad973763bdae3460d5f5bfe180e269d"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.031035 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" event={"ID":"63674760-f499-49b8-a575-a8ae954eada4","Type":"ContainerStarted","Data":"0bc300969ec8d531d557a6a53297ce22a290ca361d42de96194eab1ad4a502b6"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.052118 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8mxdc" podStartSLOduration=124.052092093 podStartE2EDuration="2m4.052092093s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.050110129 +0000 UTC m=+145.532140253" watchObservedRunningTime="2026-02-19 15:11:56.052092093 +0000 UTC m=+145.534122217" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.061285 4810 generic.go:334] "Generic (PLEG): container finished" podID="c696fe96-0485-44d0-b4fb-161503c334e8" containerID="19f280cf9c36b378a47999cd163230981c655a13662892691e3cba02c8a1e6ca" exitCode=0 Feb 19 15:11:56 crc 
kubenswrapper[4810]: I0219 15:11:56.062189 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" event={"ID":"c696fe96-0485-44d0-b4fb-161503c334e8","Type":"ContainerDied","Data":"19f280cf9c36b378a47999cd163230981c655a13662892691e3cba02c8a1e6ca"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.088350 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h68pj"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.089714 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.090467 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.590446398 +0000 UTC m=+146.072476512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.093604 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zgcrd"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.106387 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" event={"ID":"9a7776ca-1a56-4eca-9e44-ba1b7b15510f","Type":"ContainerStarted","Data":"caa24f0209a0103d15ad0edc37f840b56250f9d3ee7fee9166d950d7a854c17a"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.111931 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jm2nk" event={"ID":"239280e2-b335-4f87-89a8-00cb6f8e3c69","Type":"ContainerStarted","Data":"e22c0b66bd2edf2cc9e2215240a0a048e856f1ac853f7478d160bbf2ed8bfe87"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.116199 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d8pqg" podStartSLOduration=123.116172566 podStartE2EDuration="2m3.116172566s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.115793176 +0000 UTC m=+145.597823290" watchObservedRunningTime="2026-02-19 15:11:56.116172566 +0000 UTC m=+145.598202690" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.120284 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" event={"ID":"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f","Type":"ContainerStarted","Data":"6e3e16aaad4c49b043c295a1cde93cc27f9c6489b08a009bea8ceb47ace5b8a7"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.145253 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dkppn" event={"ID":"7a29951a-027e-49b4-a7ea-a8e363942414","Type":"ContainerStarted","Data":"def13ac9f3c80f2633deeaca2f3aa28e400ac5dc93d6737800aaec0153001453"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.147200 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.160009 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.160469 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.160337 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" event={"ID":"ef2828b3-f501-4105-abc8-6b1ce9658301","Type":"ContainerStarted","Data":"8ce155f4159ba341fb65543f9f8c0c293a0fca56dd0fb197345bf6368f2d8430"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.167026 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hlw9s" 
event={"ID":"59eb54a2-ac34-429b-a275-7d365be9ad2f","Type":"ContainerStarted","Data":"57a99ec104b993daec9138ab90fe7aa1004eeeb9fb7b47a9a690d2f75cb7076a"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.192021 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.194651 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.694629364 +0000 UTC m=+146.176659488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.200130 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hvw7f" event={"ID":"76ffdcba-57d6-4636-8373-f088926a716d","Type":"ContainerStarted","Data":"079683c2352488e17623f733b719501b4fd3f0ddbd65095b33566865f235c2ff"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.212864 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" podStartSLOduration=123.212833245 
podStartE2EDuration="2m3.212833245s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.146294435 +0000 UTC m=+145.628324559" watchObservedRunningTime="2026-02-19 15:11:56.212833245 +0000 UTC m=+145.694863369" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.216161 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kclsk" podStartSLOduration=123.216142996 podStartE2EDuration="2m3.216142996s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.202536742 +0000 UTC m=+145.684566866" watchObservedRunningTime="2026-02-19 15:11:56.216142996 +0000 UTC m=+145.698173110" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.224873 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" event={"ID":"bd317655-38ea-4fdb-95d0-82adc08456a8","Type":"ContainerStarted","Data":"6d4f444c4ef909a4018798854927d2c08e92bcbe7bcb6657cd36b18b543b34ef"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.245441 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xhg7l" podStartSLOduration=123.245412011 podStartE2EDuration="2m3.245412011s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.244244029 +0000 UTC m=+145.726274153" watchObservedRunningTime="2026-02-19 15:11:56.245412011 +0000 UTC m=+145.727442135" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.255489 
4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" event={"ID":"c18fb461-ce5b-43ad-85ca-305c3f8a7d46","Type":"ContainerStarted","Data":"a5b8d6a2012cb01f6278524103e645fa596a71b16bda554c88859e183269d288"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.270529 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" event={"ID":"13f4a10d-50aa-41ec-9931-cb835ba1f54c","Type":"ContainerStarted","Data":"697a9c21c216e48b42269ff99e0a27246ada05bb0cfec928ad6adc7a3b7dfc0e"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.271052 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hlw9s" podStartSLOduration=5.271030506 podStartE2EDuration="5.271030506s" podCreationTimestamp="2026-02-19 15:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.270637885 +0000 UTC m=+145.752668009" watchObservedRunningTime="2026-02-19 15:11:56.271030506 +0000 UTC m=+145.753060620" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.293812 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.295914 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.79588717 +0000 UTC m=+146.277917294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.296859 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" event={"ID":"8383a9e3-149b-4512-a9fd-12cd0b65e370","Type":"ContainerStarted","Data":"28491df923b9f9d021efc8a5dd0472dd72e0e7e179d81a8ba24dae22fa983519"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.331206 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4hddt" event={"ID":"362cd55c-b576-44bd-843c-078bf26b3b1e","Type":"ContainerStarted","Data":"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.333604 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-l66cb" podStartSLOduration=123.333592337 podStartE2EDuration="2m3.333592337s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.329635098 +0000 UTC m=+145.811665222" watchObservedRunningTime="2026-02-19 15:11:56.333592337 +0000 UTC m=+145.815622451" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.360741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-55x9c" 
event={"ID":"f7aa90a6-963b-4ff0-b6de-ad46fa896e18","Type":"ContainerStarted","Data":"66a4dd2101aef46f877ee54c0bc2cd6bef63a169b280c404cfe51166671c33db"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.405316 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" event={"ID":"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3","Type":"ContainerStarted","Data":"c70d6dcda85722487ba517667cc3541d1e921cb75ec6411edace40c22883a80c"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.406972 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.407585 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:56.907552092 +0000 UTC m=+146.389582216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.428881 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.443589 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" event={"ID":"25e05c8a-335b-405c-9033-f689c21c5ecc","Type":"ContainerStarted","Data":"6c1f93bc451f045e7ecf4f2a061a155dfe1a74fde876c581e5b01b87f5b9cbe9"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.456778 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" event={"ID":"e8c9b478-4884-4c32-acf1-5fdec0cfac06","Type":"ContainerStarted","Data":"daa16aa602b5e2229fd5e812438b46bfa9aeab028776fae426ca3979d07a8b12"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.464430 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cdfxh"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.466294 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.472231 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" 
event={"ID":"0f46b00f-f770-4539-92f9-60e1146308ab","Type":"ContainerStarted","Data":"c95535592b477e29bf45cc0f9dac15fc777217f356e754d6cd687ad25e827210"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.472277 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" event={"ID":"0f46b00f-f770-4539-92f9-60e1146308ab","Type":"ContainerStarted","Data":"3003c524a1a06f61e29fbc5a85cb4540506b1c36ddf20b35eb80aa3726f139b5"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.475839 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" event={"ID":"6a43ce91-6673-4641-a2d6-551afe72688d","Type":"ContainerStarted","Data":"418927cc81e88b2f74b96db863cf7acce36f70d0978d3506c79ab078ab3e5026"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.504757 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4nds4"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.508942 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.510099 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.010079532 +0000 UTC m=+146.492109646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.547015 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" event={"ID":"b7a0b611-cb8e-431b-b527-b6164471c85f","Type":"ContainerStarted","Data":"5a285a1bca8274979baec136d5aba84737e9df208c7943b87864477fe5900c2e"} Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.548083 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" podStartSLOduration=124.548061097 podStartE2EDuration="2m4.548061097s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.546055262 +0000 UTC m=+146.028085406" watchObservedRunningTime="2026-02-19 15:11:56.548061097 +0000 UTC m=+146.030091221" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.610368 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.611070 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.1110519 +0000 UTC m=+146.593082024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: W0219 15:11:56.623669 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod500c6ae7_fd1f_4af6_ad6c_f1db0c2af222.slice/crio-4b2c43301c34bb1c2be42ee14c12d2749dce24b2f61ffd4dcf0b7cbdb2ea4ec7 WatchSource:0}: Error finding container 4b2c43301c34bb1c2be42ee14c12d2749dce24b2f61ffd4dcf0b7cbdb2ea4ec7: Status 404 returned error can't find the container with id 4b2c43301c34bb1c2be42ee14c12d2749dce24b2f61ffd4dcf0b7cbdb2ea4ec7 Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.626732 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmnw"] Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.641202 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dkppn" podStartSLOduration=124.641172219 podStartE2EDuration="2m4.641172219s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.638646889 +0000 UTC m=+146.120677013" watchObservedRunningTime="2026-02-19 15:11:56.641172219 +0000 UTC m=+146.123202343" Feb 19 
15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.694722 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hvw7f" podStartSLOduration=123.694698141 podStartE2EDuration="2m3.694698141s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.678213528 +0000 UTC m=+146.160243652" watchObservedRunningTime="2026-02-19 15:11:56.694698141 +0000 UTC m=+146.176728265" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.711851 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.713298 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.213265732 +0000 UTC m=+146.695296016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.715255 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f4qrf" podStartSLOduration=123.715194005 podStartE2EDuration="2m3.715194005s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.713457707 +0000 UTC m=+146.195487831" watchObservedRunningTime="2026-02-19 15:11:56.715194005 +0000 UTC m=+146.197224129" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.751874 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4hddt" podStartSLOduration=124.751849833 podStartE2EDuration="2m4.751849833s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.74918255 +0000 UTC m=+146.231212674" watchObservedRunningTime="2026-02-19 15:11:56.751849833 +0000 UTC m=+146.233879957" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.813452 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" podStartSLOduration=123.813412777 podStartE2EDuration="2m3.813412777s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.803613328 +0000 UTC m=+146.285643452" watchObservedRunningTime="2026-02-19 15:11:56.813412777 +0000 UTC m=+146.295442901" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.820207 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.820728 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.320711288 +0000 UTC m=+146.802741402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.923852 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:56 crc kubenswrapper[4810]: E0219 15:11:56.924577 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.424551914 +0000 UTC m=+146.906582038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.924614 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.944066 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:11:56 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:11:56 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:11:56 crc kubenswrapper[4810]: healthz check failed Feb 19 15:11:56 crc kubenswrapper[4810]: I0219 15:11:56.944543 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.026014 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.026445 4810 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.526431567 +0000 UTC m=+147.008461681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.132141 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.132638 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.632616488 +0000 UTC m=+147.114646612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.215370 4810 csr.go:261] certificate signing request csr-kl2l4 is approved, waiting to be issued Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.234810 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.234977 4810 csr.go:257] certificate signing request csr-kl2l4 is issued Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.236089 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.735893259 +0000 UTC m=+147.217923383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.338533 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.339347 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.839308784 +0000 UTC m=+147.321338908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.447566 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.447968 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:57.947947853 +0000 UTC m=+147.429977977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.549424 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.549650 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.04960117 +0000 UTC m=+147.531631294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.549726 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.550112 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.050104183 +0000 UTC m=+147.532134297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.594625 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" event={"ID":"0c7e5d88-e6b7-416c-abbd-eed95cc772de","Type":"ContainerStarted","Data":"03d1da5fa69b0120b4edb3d8d72d814e92881d0def41990e885bbb57dcbed6f1"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.594676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" event={"ID":"0c7e5d88-e6b7-416c-abbd-eed95cc772de","Type":"ContainerStarted","Data":"ad1fce03b3ad1750486622c5e19772212fbe373da005f0287038be329e0b340f"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.598859 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" event={"ID":"e3f898c2-3784-4157-b1c5-fdadcaf69bef","Type":"ContainerStarted","Data":"478a0a3f318d9b05dedcd04ea22ce7fdc59f6a62f343db5dab5c45f7df34f58b"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.605523 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.615726 4810 generic.go:334] "Generic (PLEG): container finished" podID="bd317655-38ea-4fdb-95d0-82adc08456a8" containerID="28e82457805a6bf079bf2dad976697aad3c472aa72e00d44a0a62bb4d125ff0a" 
exitCode=0 Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.615813 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" event={"ID":"bd317655-38ea-4fdb-95d0-82adc08456a8","Type":"ContainerDied","Data":"28e82457805a6bf079bf2dad976697aad3c472aa72e00d44a0a62bb4d125ff0a"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.616186 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwvxk" podStartSLOduration=124.616161401 podStartE2EDuration="2m4.616161401s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.614846034 +0000 UTC m=+147.096876158" watchObservedRunningTime="2026-02-19 15:11:57.616161401 +0000 UTC m=+147.098191525" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.617793 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wpzzq" podStartSLOduration=124.617786165 podStartE2EDuration="2m4.617786165s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:56.857000606 +0000 UTC m=+146.339030730" watchObservedRunningTime="2026-02-19 15:11:57.617786165 +0000 UTC m=+147.099816289" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.619173 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" event={"ID":"c18fb461-ce5b-43ad-85ca-305c3f8a7d46","Type":"ContainerStarted","Data":"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.622451 4810 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.625253 4810 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-r74mv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.625316 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.635348 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l924n" event={"ID":"b7a0b611-cb8e-431b-b527-b6164471c85f","Type":"ContainerStarted","Data":"417b1c9ef68274eb78636581f01b1e27c85f526832c5afc8942d6f5d65fbce96"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.643064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerStarted","Data":"20e10c5e198ddee65262d0528102f59529d30ad7bead31c5ab5c764fe94b9de0"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.643128 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerStarted","Data":"23f5d4bb5ee04c131b85331f5b6ae4b924ed6b6ac6634c148de3748ba4fdc4ad"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.643437 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.645214 4810 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r2tqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.645357 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.645774 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" event={"ID":"5cc06c55-7085-4cc0-8399-833b4243b51e","Type":"ContainerStarted","Data":"3d9d1cfaea4bc4e51b6abb70330420f848060901beca560335cc25805281c669"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.647559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" event={"ID":"51a1f271-446d-42d2-b946-ad816257e990","Type":"ContainerStarted","Data":"d3b0a9e492aad83b4cf9a1e4c4559e72e6ceb284e2bdb13c44e59af397812c51"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.649357 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zgcrd" event={"ID":"9d9ff005-2d60-4eb5-b8fa-59b84661617f","Type":"ContainerStarted","Data":"e50bf417d204b2f09e8c39f3284669676184c4b43195ceec33490c41a512a600"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.650363 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.652676 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.152644914 +0000 UTC m=+147.634675038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.654035 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" event={"ID":"b1e6fd37-d069-441e-9c0f-caf60d8f8d4f","Type":"ContainerStarted","Data":"b72ae89c857a22287ea3377fd79343c983ace6b61cae026d7092f60f166bba01"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.659986 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dkppn" event={"ID":"7a29951a-027e-49b4-a7ea-a8e363942414","Type":"ContainerStarted","Data":"248f3396a274a7c02cca57cafc68effaca4124e54f5008022c7525d6c545f088"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.661129 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.661186 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.662762 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" event={"ID":"d3068e5d-4ff8-438f-958b-f0f90e773ca1","Type":"ContainerStarted","Data":"0da1adf715ffdda188ae81236960cc45f0f62f1e9b196e4f1787585c4749917b"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.662862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" event={"ID":"d3068e5d-4ff8-438f-958b-f0f90e773ca1","Type":"ContainerStarted","Data":"6f2e9520232ccf40a4afb1888425bf5a4d15f80f23bf1c3779655c495f6cbea9"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.663373 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.667698 4810 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mkt5x container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.667808 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" podUID="d3068e5d-4ff8-438f-958b-f0f90e773ca1" containerName="olm-operator" 
probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.688733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" event={"ID":"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3","Type":"ContainerStarted","Data":"17c9e8b8353f936cab2574451ae92e4f6600cade690dc61485fe9ca9793dedec"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.699538 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.700125 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.702400 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" podStartSLOduration=125.702380012 podStartE2EDuration="2m5.702380012s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.699726459 +0000 UTC m=+147.181756573" watchObservedRunningTime="2026-02-19 15:11:57.702380012 +0000 UTC m=+147.184410136" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.722633 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" event={"ID":"6a43ce91-6673-4641-a2d6-551afe72688d","Type":"ContainerStarted","Data":"725a8cab464d281a961e6feaf009a3b4a1df8ec4670406a42d12eba66bc52eef"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.724607 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 
19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.739390 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8j2gh" podStartSLOduration=124.73936417 podStartE2EDuration="2m4.73936417s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.737684674 +0000 UTC m=+147.219714798" watchObservedRunningTime="2026-02-19 15:11:57.73936417 +0000 UTC m=+147.221394294" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.739946 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" event={"ID":"8383a9e3-149b-4512-a9fd-12cd0b65e370","Type":"ContainerStarted","Data":"c19d21352b758655d40944eafe4d1d6cfee80125c13e6f74424f93ccd9aee7cf"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.763969 4810 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hmxtq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.764028 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" podUID="6a43ce91-6673-4641-a2d6-551afe72688d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.765097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.766720 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.266699452 +0000 UTC m=+147.748729576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.782665 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" event={"ID":"9b8f83b3-fabb-4404-88ce-e64c4db8a568","Type":"ContainerStarted","Data":"b67a2ca301f32b5bba0c87aa2c08dea248aa7238bcaa9a1aab349c455f01385f"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.792715 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" podStartSLOduration=124.792691907 podStartE2EDuration="2m4.792691907s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.790571649 +0000 UTC m=+147.272601773" watchObservedRunningTime="2026-02-19 15:11:57.792691907 +0000 UTC 
m=+147.274722031" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.812996 4810 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5d4rp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]log ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]etcd ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/max-in-flight-filter ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 15:11:57 crc kubenswrapper[4810]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 15:11:57 crc kubenswrapper[4810]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-startinformers ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 15:11:57 crc kubenswrapper[4810]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 15:11:57 crc kubenswrapper[4810]: livez check failed Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.813083 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" podUID="a4b3e85d-d02e-4e13-8bab-aa86d2629d85" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.817382 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" event={"ID":"25e05c8a-335b-405c-9033-f689c21c5ecc","Type":"ContainerStarted","Data":"202cd18f5d7c80a1e69ade4303444999603aab1e57b5b36c43cfeab608e76af1"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.843683 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" event={"ID":"ef2828b3-f501-4105-abc8-6b1ce9658301","Type":"ContainerStarted","Data":"56611b5e8046125630e65a48127c89c1579026b4e92c66468343073cf75da26c"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.844857 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.851505 4810 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2phqk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.851567 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" podUID="ef2828b3-f501-4105-abc8-6b1ce9658301" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.868505 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:57 crc kubenswrapper[4810]: E0219 15:11:57.869805 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.369785578 +0000 UTC m=+147.851815702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.874055 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" event={"ID":"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6","Type":"ContainerStarted","Data":"dc986c76de8632d646d80420e0f140b9a738f5647ff326454ca42eca27431ca5"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.875069 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.881429 4810 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sl5p9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.881471 4810 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.885721 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" event={"ID":"74295390-5384-402d-8c5b-dc2559bb6d9c","Type":"ContainerStarted","Data":"c1c70d9edb3fc636663264f1069b22c28c56ea4c83e4d1e90ce90161570a061c"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.890441 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" event={"ID":"64dc0d58-11d4-456b-97ab-a4d3ec28225b","Type":"ContainerStarted","Data":"edb528b7805eeb9b438b67a66e2248b6ee9c5aa489bd899fce4c6b547b3d343b"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.890470 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" event={"ID":"64dc0d58-11d4-456b-97ab-a4d3ec28225b","Type":"ContainerStarted","Data":"81fea48dc549d3dae06b3f0a36281a04deb8df53595d251f4e43f11ceb3755c6"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.894458 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" podStartSLOduration=124.894440836 podStartE2EDuration="2m4.894440836s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.887882436 +0000 UTC m=+147.369912560" watchObservedRunningTime="2026-02-19 15:11:57.894440836 +0000 UTC m=+147.376470960" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.912722 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" podStartSLOduration=125.912701998 podStartE2EDuration="2m5.912701998s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.909697706 +0000 UTC m=+147.391727820" watchObservedRunningTime="2026-02-19 15:11:57.912701998 +0000 UTC m=+147.394732122" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.914902 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" event={"ID":"13f4a10d-50aa-41ec-9931-cb835ba1f54c","Type":"ContainerStarted","Data":"2a6589f5ecdc63853851db8ee9c7337f46484398d1ef968d041031fbf184b894"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.935610 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:11:57 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:11:57 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:11:57 crc kubenswrapper[4810]: healthz check failed Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.935702 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.962001 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h68pj" 
event={"ID":"969b9cb6-f89f-47e9-b8f7-754804a41dea","Type":"ContainerStarted","Data":"26f3a7f9b20109269e9a2e47b0214f0759582a1f5ef12e8bd49e8ec1dd79876c"} Feb 19 15:11:57 crc kubenswrapper[4810]: I0219 15:11:57.963340 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.004002 4810 patch_prober.go:28] interesting pod/console-operator-58897d9998-h68pj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.004065 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h68pj" podUID="969b9cb6-f89f-47e9-b8f7-754804a41dea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.005899 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.007512 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.507489226 +0000 UTC m=+147.989519350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.020057 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" event={"ID":"63674760-f499-49b8-a575-a8ae954eada4","Type":"ContainerStarted","Data":"8f55397a990d53e8fdcf63205f71dbdf1875eff2c346ffa6c66631e0b4440b65"} Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.022010 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" podStartSLOduration=125.021981935 podStartE2EDuration="2m5.021981935s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.997340927 +0000 UTC m=+147.479371051" watchObservedRunningTime="2026-02-19 15:11:58.021981935 +0000 UTC m=+147.504012059" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.042888 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hlw9s" event={"ID":"59eb54a2-ac34-429b-a275-7d365be9ad2f","Type":"ContainerStarted","Data":"254dcc44696270546043618fad29adfb2ea027a838c257af261d9174dee95d07"} Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.048813 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" podStartSLOduration=125.048773932 
podStartE2EDuration="2m5.048773932s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:57.947864576 +0000 UTC m=+147.429894700" watchObservedRunningTime="2026-02-19 15:11:58.048773932 +0000 UTC m=+147.530804056" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.055993 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" podStartSLOduration=126.05597007 podStartE2EDuration="2m6.05597007s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:58.045358968 +0000 UTC m=+147.527389092" watchObservedRunningTime="2026-02-19 15:11:58.05597007 +0000 UTC m=+147.538000194" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.074469 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cdfxh" event={"ID":"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222","Type":"ContainerStarted","Data":"4b2c43301c34bb1c2be42ee14c12d2749dce24b2f61ffd4dcf0b7cbdb2ea4ec7"} Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.084834 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.113292 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.114353 4810 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" podStartSLOduration=125.114304724 podStartE2EDuration="2m5.114304724s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:58.112539126 +0000 UTC m=+147.594569250" watchObservedRunningTime="2026-02-19 15:11:58.114304724 +0000 UTC m=+147.596334848" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.115365 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.615344673 +0000 UTC m=+148.097374797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.191567 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-h68pj" podStartSLOduration=126.191551689 podStartE2EDuration="2m6.191551689s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:58.148059533 +0000 UTC m=+147.630089667" watchObservedRunningTime="2026-02-19 15:11:58.191551689 +0000 UTC m=+147.673581813" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.193249 
4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-895xv" podStartSLOduration=125.193243426 podStartE2EDuration="2m5.193243426s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:58.189773021 +0000 UTC m=+147.671803145" watchObservedRunningTime="2026-02-19 15:11:58.193243426 +0000 UTC m=+147.675273550" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.222672 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.223210 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.72319306 +0000 UTC m=+148.205223194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.240434 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 15:06:57 +0000 UTC, rotation deadline is 2026-12-31 18:02:41.286319873 +0000 UTC Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.240492 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7562h50m43.045831627s for next certificate rotation Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.324339 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.324785 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.824764923 +0000 UTC m=+148.306795047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.425753 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.426439 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:58.92642653 +0000 UTC m=+148.408456654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.527389 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.527728 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.027705186 +0000 UTC m=+148.509735300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.629746 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.630291 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.130268068 +0000 UTC m=+148.612298192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.730988 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.731407 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.231355749 +0000 UTC m=+148.713385873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.731583 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.732291 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.232284244 +0000 UTC m=+148.714314368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.833131 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.833370 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.333293353 +0000 UTC m=+148.815323477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.833941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.834465 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.334448625 +0000 UTC m=+148.816478749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.922721 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:11:58 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:11:58 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:11:58 crc kubenswrapper[4810]: healthz check failed Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.922805 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.935193 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.935413 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 15:11:59.435376841 +0000 UTC m=+148.917406965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:58 crc kubenswrapper[4810]: I0219 15:11:58.935598 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:58 crc kubenswrapper[4810]: E0219 15:11:58.935986 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.435979648 +0000 UTC m=+148.918009772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.036593 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.037044 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.537024157 +0000 UTC m=+149.019054281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.080981 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zgcrd" event={"ID":"9d9ff005-2d60-4eb5-b8fa-59b84661617f","Type":"ContainerStarted","Data":"a0af2a85460b19646d885a9256a84dadad96f69cd267f25a21c80efe282ec882"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.081033 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zgcrd" event={"ID":"9d9ff005-2d60-4eb5-b8fa-59b84661617f","Type":"ContainerStarted","Data":"ae85b1ddc4b1ab2da9cea5f9ee9ebd0a2db8c3b79c669a3398728442a02a3d05"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.081109 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zgcrd" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.082619 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" event={"ID":"9b8f83b3-fabb-4404-88ce-e64c4db8a568","Type":"ContainerStarted","Data":"205358139c66fa88dffe0a0748c08c915ce0d1c24490505b18156024f28936f7"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.084626 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" event={"ID":"e3f898c2-3784-4157-b1c5-fdadcaf69bef","Type":"ContainerStarted","Data":"0d83aec25fc1a573e7c3e19ad4c8affa483842fbb3a3e191159c8cbc243fed32"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.084680 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" event={"ID":"e3f898c2-3784-4157-b1c5-fdadcaf69bef","Type":"ContainerStarted","Data":"b38c1b3ffd216993301de29c67a678209dd2347fd0adc66bc7cbde507213a16b"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.086666 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" event={"ID":"63674760-f499-49b8-a575-a8ae954eada4","Type":"ContainerStarted","Data":"8f834404cc5083f567a6fe53c7623a57002a3c6eff3c4ebfdcf86c4bd07b136b"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.092371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cdfxh" event={"ID":"500c6ae7-fd1f-4af6-ad6c-f1db0c2af222","Type":"ContainerStarted","Data":"7adc9b250a6a76b10499f674c3e12527dfa13fd141d2706c9ebf94e73a56c217"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.096558 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xhzlb" event={"ID":"25e05c8a-335b-405c-9033-f689c21c5ecc","Type":"ContainerStarted","Data":"eb00d6cba86be6f8cddf9c0c8c12c82f35e2b4846b28e3717d2bda8599a88c6f"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.099195 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" event={"ID":"5cc06c55-7085-4cc0-8399-833b4243b51e","Type":"ContainerStarted","Data":"a1237df0c53b58e31bf7fa911823c601de88872c3b524920384c6eddfa11fa5a"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.101306 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" event={"ID":"bd317655-38ea-4fdb-95d0-82adc08456a8","Type":"ContainerStarted","Data":"cbe32080fb6211542039c554f671bd604c6b9c57b66f75dcdfda41ae07006341"} Feb 19 15:11:59 
crc kubenswrapper[4810]: I0219 15:11:59.101488 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.102978 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" event={"ID":"51a1f271-446d-42d2-b946-ad816257e990","Type":"ContainerStarted","Data":"a12468bff98271bf07b1ca74b1418dfc217b671839f3b05aa88a63a026968cf3"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.104671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" event={"ID":"74295390-5384-402d-8c5b-dc2559bb6d9c","Type":"ContainerStarted","Data":"db08f4f480788aa31dc85e6273694f95382402643122d80f7d2b42f9a5302119"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.105956 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zgcrd" podStartSLOduration=8.105942693 podStartE2EDuration="8.105942693s" podCreationTimestamp="2026-02-19 15:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.104121633 +0000 UTC m=+148.586151757" watchObservedRunningTime="2026-02-19 15:11:59.105942693 +0000 UTC m=+148.587972807" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.106113 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h68pj" event={"ID":"969b9cb6-f89f-47e9-b8f7-754804a41dea","Type":"ContainerStarted","Data":"fe78a3f6ab8ab3705b559ae0a675b50325af5e9b16ae7e704e9077c458264031"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.106759 4810 patch_prober.go:28] interesting pod/console-operator-58897d9998-h68pj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.106814 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h68pj" podUID="969b9cb6-f89f-47e9-b8f7-754804a41dea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.107545 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" event={"ID":"b1fae478-5e9a-4da1-b2e3-35c1ee7f8fa3","Type":"ContainerStarted","Data":"25b5d07e3dfc7cdea6a77c1f385d6d71ab82685362e3b633794ae509f5b6007d"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.107713 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.109378 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" event={"ID":"026a2812-dc11-4b60-b911-bb41a0d39d7d","Type":"ContainerStarted","Data":"50e6ad8f65a86bcce9aca00f227e01c29e67e169768666571b50bcdceeff81d4"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.126988 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" event={"ID":"c696fe96-0485-44d0-b4fb-161503c334e8","Type":"ContainerStarted","Data":"d40017474d6321ea79edca666f5d7c4fe5ef4acaa79e4cf58bd8ca4e34ac6557"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.130736 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" 
event={"ID":"13f4a10d-50aa-41ec-9931-cb835ba1f54c","Type":"ContainerStarted","Data":"80bf2fc295e5bc2097742d7714644f5e5d8647fe7c6739e1ddaf2d35ffcd233b"} Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.131856 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.131904 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.132119 4810 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r2tqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.132196 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.143458 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.143946 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.643931929 +0000 UTC m=+149.125962053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.144535 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w6k69" podStartSLOduration=126.144515465 podStartE2EDuration="2m6.144515465s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.142454388 +0000 UTC m=+148.624484512" watchObservedRunningTime="2026-02-19 15:11:59.144515465 +0000 UTC m=+148.626545589" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.151701 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.151799 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2phqk" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.155515 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hmxtq" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.173883 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kmhmh" podStartSLOduration=126.173852862 podStartE2EDuration="2m6.173852862s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.172266168 +0000 UTC m=+148.654296292" watchObservedRunningTime="2026-02-19 15:11:59.173852862 +0000 UTC m=+148.655882986" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.191599 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mkt5x" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.251271 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.252522 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.752490635 +0000 UTC m=+149.234520759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.253285 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.286910 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cdfxh" podStartSLOduration=8.286871321 podStartE2EDuration="8.286871321s" podCreationTimestamp="2026-02-19 15:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.230748107 +0000 UTC m=+148.712778231" watchObservedRunningTime="2026-02-19 15:11:59.286871321 +0000 UTC m=+148.768901445" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.288758 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-65ltw" podStartSLOduration=126.288747082 podStartE2EDuration="2m6.288747082s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.264064173 +0000 UTC m=+148.746094297" 
watchObservedRunningTime="2026-02-19 15:11:59.288747082 +0000 UTC m=+148.770777196" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.289701 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.789679748 +0000 UTC m=+149.271709872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.356038 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n6fl9" podStartSLOduration=126.356003573 podStartE2EDuration="2m6.356003573s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.355318494 +0000 UTC m=+148.837348618" watchObservedRunningTime="2026-02-19 15:11:59.356003573 +0000 UTC m=+148.838033697" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.397725 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" podStartSLOduration=127.3977067 podStartE2EDuration="2m7.3977067s" podCreationTimestamp="2026-02-19 15:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.395064587 +0000 UTC 
m=+148.877094711" watchObservedRunningTime="2026-02-19 15:11:59.3977067 +0000 UTC m=+148.879736824" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.415396 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.415755 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.415795 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.415827 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.415856 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.417180 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:11:59.917130944 +0000 UTC m=+149.399161058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.418810 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.425625 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.432974 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" podStartSLOduration=126.43295202 podStartE2EDuration="2m6.43295202s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.431713835 +0000 UTC m=+148.913743959" watchObservedRunningTime="2026-02-19 15:11:59.43295202 +0000 UTC m=+148.914982144" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.436845 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.455417 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.470746 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.482446 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.527268 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.527810 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.027789479 +0000 UTC m=+149.509819603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.556696 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.614279 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zrsn2" podStartSLOduration=126.614261117 podStartE2EDuration="2m6.614261117s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.564708924 +0000 UTC m=+149.046739038" watchObservedRunningTime="2026-02-19 15:11:59.614261117 +0000 UTC m=+149.096291241" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.629073 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.629698 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.129679822 +0000 UTC m=+149.611709946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.653135 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-frmnw" podStartSLOduration=126.653113356 podStartE2EDuration="2m6.653113356s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.649827286 +0000 UTC m=+149.131857410" watchObservedRunningTime="2026-02-19 15:11:59.653113356 +0000 UTC m=+149.135143480" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.702979 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b" podStartSLOduration=126.702958708 podStartE2EDuration="2m6.702958708s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.694657229 +0000 UTC m=+149.176687353" watchObservedRunningTime="2026-02-19 15:11:59.702958708 +0000 UTC m=+149.184988832" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.741169 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: 
\"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.741537 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.241524318 +0000 UTC m=+149.723554442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.827296 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9lqqd" podStartSLOduration=126.827275177 podStartE2EDuration="2m6.827275177s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:11:59.82590776 +0000 UTC m=+149.307937894" watchObservedRunningTime="2026-02-19 15:11:59.827275177 +0000 UTC m=+149.309305301" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.842174 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.842582 4810 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.342560148 +0000 UTC m=+149.824590272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.920369 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:11:59 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:11:59 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:11:59 crc kubenswrapper[4810]: healthz check failed Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.920441 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:11:59 crc kubenswrapper[4810]: I0219 15:11:59.944206 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:11:59 crc kubenswrapper[4810]: E0219 15:11:59.944589 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.444574264 +0000 UTC m=+149.926604388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.046788 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.047161 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.547141716 +0000 UTC m=+150.029171840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.134420 4810 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-r74mv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.134489 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.148456 4810 patch_prober.go:28] interesting pod/console-operator-58897d9998-h68pj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.148518 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h68pj" podUID="969b9cb6-f89f-47e9-b8f7-754804a41dea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 
10.217.0.41:8443: connect: connection refused" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.149127 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.149482 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.649467311 +0000 UTC m=+150.131497435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.250036 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.251647 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 15:12:00.751616511 +0000 UTC m=+150.233646635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.353493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.354358 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.854342647 +0000 UTC m=+150.336372771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.447054 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.448262 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.457102 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.457587 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:00.957565057 +0000 UTC m=+150.439595181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.471271 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.471568 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.506868 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.562691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.562810 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.562837 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.562871 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzns\" (UniqueName: \"kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.569548 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.069534397 +0000 UTC m=+150.551564511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: W0219 15:12:00.579500 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-570a8297512c532cdc027f9e5ab8b517aab8acf061999ac36570f8d6d0967282 WatchSource:0}: Error finding container 570a8297512c532cdc027f9e5ab8b517aab8acf061999ac36570f8d6d0967282: Status 404 returned error can't find the container with id 570a8297512c532cdc027f9e5ab8b517aab8acf061999ac36570f8d6d0967282 Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.615105 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.616586 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.626436 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.666539 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.666809 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.666847 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzns\" (UniqueName: \"kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.666883 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.667394 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.667483 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.167463911 +0000 UTC m=+150.649494025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.667696 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.684601 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.724573 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzns\" (UniqueName: \"kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns\") pod \"certified-operators-rk4vw\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " 
pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.769862 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.769936 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.769964 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqz6g\" (UniqueName: \"kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.770008 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.770352 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-19 15:12:01.270338211 +0000 UTC m=+150.752368335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: W0219 15:12:00.791265 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-baad19be76aa3ec4b98ca06f5e19af7486a086e59048c2e0cba45eebcb02a1c0 WatchSource:0}: Error finding container baad19be76aa3ec4b98ca06f5e19af7486a086e59048c2e0cba45eebcb02a1c0: Status 404 returned error can't find the container with id baad19be76aa3ec4b98ca06f5e19af7486a086e59048c2e0cba45eebcb02a1c0 Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.802512 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-blpmq"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.806816 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.836353 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blpmq"] Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.868261 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873316 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873593 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqz6g\" (UniqueName: \"kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873622 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873650 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873684 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content\") pod \"community-operators-d5ks5\" 
(UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.873740 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvr6\" (UniqueName: \"kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.873851 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.373832598 +0000 UTC m=+150.855862722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.874494 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.874694 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.921473 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqz6g\" (UniqueName: \"kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g\") pod \"community-operators-d5ks5\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.928800 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:12:00 crc kubenswrapper[4810]: [-]has-synced failed: reason 
withheld Feb 19 15:12:00 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:12:00 crc kubenswrapper[4810]: healthz check failed Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.928856 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.976506 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvr6\" (UniqueName: \"kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.976571 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.976599 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.976631 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content\") pod \"certified-operators-blpmq\" (UID: 
\"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.977017 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:00 crc kubenswrapper[4810]: E0219 15:12:00.977026 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.477006917 +0000 UTC m=+150.959037041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:00 crc kubenswrapper[4810]: I0219 15:12:00.977271 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.000575 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.001962 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.019783 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.052905 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvr6\" (UniqueName: \"kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6\") pod \"certified-operators-blpmq\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.053236 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.089900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.090141 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.090173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities\") pod \"community-operators-x8sn2\" (UID: 
\"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.090205 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzxn\" (UniqueName: \"kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.090311 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.590291593 +0000 UTC m=+151.072321717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.135587 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.192011 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.192062 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.192086 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.192113 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzxn\" (UniqueName: \"kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.192818 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content\") pod 
\"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.193032 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities\") pod \"community-operators-x8sn2\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.193261 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.693251075 +0000 UTC m=+151.175281199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.194547 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"93d461b585c01439340c8c9488368a4a7839b7c75ac33dcc961a4492e523cd98"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.194610 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"570a8297512c532cdc027f9e5ab8b517aab8acf061999ac36570f8d6d0967282"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.208126 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"baad19be76aa3ec4b98ca06f5e19af7486a086e59048c2e0cba45eebcb02a1c0"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.209551 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" event={"ID":"51a1f271-446d-42d2-b946-ad816257e990","Type":"ContainerStarted","Data":"d0072ff36543b17d469c813991b0fb32b7a39ed74b27ca4f0397f845c9c9b41b"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.214468 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"07af0fda8448a35fda3d865973906afaea74b6956c2198bcbe319d672854e33f"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.214515 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"aa410fc7de0d5cf567d6d79f919b256f04936d22b1fff2a8b2883c5a0cba291b"} Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.215038 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.246067 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzxn\" (UniqueName: \"kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn\") pod \"community-operators-x8sn2\" (UID: 
\"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.294968 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.296475 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.796451045 +0000 UTC m=+151.278481169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.352861 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.397882 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.398201 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:01.898189083 +0000 UTC m=+151.380219207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.500876 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.501703 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.001685321 +0000 UTC m=+151.483715445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.588848 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.603853 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.604201 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.10418845 +0000 UTC m=+151.586218574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.704762 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.705233 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.20521396 +0000 UTC m=+151.687244084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.744129 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.808577 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.809096 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.309077077 +0000 UTC m=+151.791107211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.874592 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blpmq"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.900663 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.901717 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.909346 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.909580 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.40955116 +0000 UTC m=+151.891581284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.909703 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:01 crc kubenswrapper[4810]: E0219 15:12:01.910060 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.410048464 +0000 UTC m=+151.892078588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.910550 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.911230 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.915464 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.927485 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:12:01 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:12:01 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:12:01 crc kubenswrapper[4810]: healthz check failed Feb 19 15:12:01 crc kubenswrapper[4810]: I0219 15:12:01.927535 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.010528 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.010818 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.010859 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.011007 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.510989211 +0000 UTC m=+151.993019335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.113892 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.114415 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.114438 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.114886 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.115221 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.615207608 +0000 UTC m=+152.097237732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.170001 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.217931 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.218259 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 15:12:02.718242232 +0000 UTC m=+152.200272356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.234950 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" event={"ID":"51a1f271-446d-42d2-b946-ad816257e990","Type":"ContainerStarted","Data":"399d89cabfd6cf6cada79838bfbcfea0a2a7903c17e611bc093418cf9fe0ed03"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.236984 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerStarted","Data":"1962f43ea8735830650ac9b311ce674cd5cebcb42c9922bc390ae19775d9f9f0"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.237023 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerStarted","Data":"c3d08bc3ddaa041e0392052ef7f026d6557f47271ad22913b60948a058b74b85"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.240774 4810 generic.go:334] "Generic (PLEG): container finished" podID="8383a9e3-149b-4512-a9fd-12cd0b65e370" containerID="c19d21352b758655d40944eafe4d1d6cfee80125c13e6f74424f93ccd9aee7cf" exitCode=0 Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.240826 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" 
event={"ID":"8383a9e3-149b-4512-a9fd-12cd0b65e370","Type":"ContainerDied","Data":"c19d21352b758655d40944eafe4d1d6cfee80125c13e6f74424f93ccd9aee7cf"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.242091 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerStarted","Data":"4442c4b83b02a776b63ed28285fde96beca3df86a20a033d8feb27311a4298e1"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.243105 4810 generic.go:334] "Generic (PLEG): container finished" podID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerID="dca6d9a99a30ff4bef03f7e86c179f1a1309a876ad27e10ee78ace680fa82510" exitCode=0 Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.243146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerDied","Data":"dca6d9a99a30ff4bef03f7e86c179f1a1309a876ad27e10ee78ace680fa82510"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.243174 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerStarted","Data":"2a93b7168fafbe84b16d4aeee817860063438d02590293e9edf6bad1699c168a"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.244442 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.257276 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"72d421e29d3b69f3ec6be0bc92465003b9eef087291dccffe7e971f2ede9604f"} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.268689 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.306506 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.324082 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.327705 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.827668342 +0000 UTC m=+152.309698466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.396272 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.398893 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.404926 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.418491 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.447386 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.447653 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxf5\" (UniqueName: \"kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.447694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.447732 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content\") pod \"redhat-marketplace-ptbh9\" (UID: 
\"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.447847 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 15:12:02.947831448 +0000 UTC m=+152.429861572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.548872 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxf5\" (UniqueName: \"kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.549221 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.549258 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content\") pod 
\"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.549304 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:02 crc kubenswrapper[4810]: E0219 15:12:02.549663 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 15:12:03.049642229 +0000 UTC m=+152.531672353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7bnq2" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.550026 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.550373 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.572987 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxf5\" (UniqueName: \"kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5\") pod \"redhat-marketplace-ptbh9\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.603406 4810 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.622902 4810 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T15:12:02.603444679Z","Handler":null,"Name":""} Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.639584 4810 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.639971 4810 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.651986 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.673672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.725663 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-25cmt" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.726830 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.730604 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.739029 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5d4rp" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.762632 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.831418 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 15:12:02 crc 
kubenswrapper[4810]: I0219 15:12:02.832870 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"] Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.868220 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.903161 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"] Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.945210 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:12:02 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:12:02 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:12:02 crc kubenswrapper[4810]: healthz check failed Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.945276 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.969058 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4nld\" (UniqueName: \"kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.969141 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.969196 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.976344 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 15:12:02 crc kubenswrapper[4810]: I0219 15:12:02.976384 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.072044 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.072111 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4nld\" 
(UniqueName: \"kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.072155 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.072686 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.072919 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.125409 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4nld\" (UniqueName: \"kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld\") pod \"redhat-marketplace-dfqg8\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") " pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.183876 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7bnq2\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.219836 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfqg8" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.283529 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.303762 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.304838 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.320172 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.320250 4810 generic.go:334] "Generic (PLEG): container finished" podID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerID="5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284" exitCode=0 Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.320393 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerDied","Data":"5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.320473 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" 
event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerStarted","Data":"b765fd7f2fdc89df238064ee08672584bc5ff6e6cd24e9a1fe430dad87064297"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.325150 4810 generic.go:334] "Generic (PLEG): container finished" podID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerID="01e828c8531030a2de34cc2f43410bf4a69a3af4e19f1adb2dbd098f1f78eca6" exitCode=0 Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.325370 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerDied","Data":"01e828c8531030a2de34cc2f43410bf4a69a3af4e19f1adb2dbd098f1f78eca6"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.337140 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bf4faab-dab6-498d-bf73-7a740448c64b","Type":"ContainerStarted","Data":"b81cbe17b0c85b33afea4e73bfe0c252ec94240f8fe691abb4c277e1104d95b3"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.340176 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" event={"ID":"51a1f271-446d-42d2-b946-ad816257e990","Type":"ContainerStarted","Data":"efb67ae02881bb15abecd46f79301f6c12b8a8783efc2371e4daa8fc6157156e"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.352253 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.373454 4810 generic.go:334] "Generic (PLEG): container finished" podID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerID="1962f43ea8735830650ac9b311ce674cd5cebcb42c9922bc390ae19775d9f9f0" exitCode=0 Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.374823 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerDied","Data":"1962f43ea8735830650ac9b311ce674cd5cebcb42c9922bc390ae19775d9f9f0"} Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.390541 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4nds4" podStartSLOduration=12.390520801 podStartE2EDuration="12.390520801s" podCreationTimestamp="2026-02-19 15:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:03.371704084 +0000 UTC m=+152.853734208" watchObservedRunningTime="2026-02-19 15:12:03.390520801 +0000 UTC m=+152.872550925" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.532046 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.582947 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.583475 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.585609 4810 patch_prober.go:28] interesting pod/console-f9d7485db-4hddt 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.585734 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4hddt" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.586949 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.586987 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.587070 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.587086 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:03 crc kubenswrapper[4810]: 
I0219 15:12:03.610792 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"] Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.613413 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.619974 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.623208 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"] Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.673197 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"] Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.686435 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.686535 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45t8\" (UniqueName: \"kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.686561 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities\") pod \"redhat-operators-gp8sg\" (UID: 
\"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.789162 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.789724 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45t8\" (UniqueName: \"kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.790963 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.792011 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.792728 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " 
pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.851956 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45t8\" (UniqueName: \"kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8\") pod \"redhat-operators-gp8sg\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.916363 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.923370 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:12:03 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:12:03 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:12:03 crc kubenswrapper[4810]: healthz check failed Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.923423 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.933951 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:12:03 crc kubenswrapper[4810]: I0219 15:12:03.979304 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.014071 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume\") pod \"8383a9e3-149b-4512-a9fd-12cd0b65e370\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.014124 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdm7r\" (UniqueName: \"kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r\") pod \"8383a9e3-149b-4512-a9fd-12cd0b65e370\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.014151 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume\") pod \"8383a9e3-149b-4512-a9fd-12cd0b65e370\" (UID: \"8383a9e3-149b-4512-a9fd-12cd0b65e370\") " Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.024639 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r" (OuterVolumeSpecName: "kube-api-access-jdm7r") pod "8383a9e3-149b-4512-a9fd-12cd0b65e370" (UID: "8383a9e3-149b-4512-a9fd-12cd0b65e370"). InnerVolumeSpecName "kube-api-access-jdm7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.024727 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8383a9e3-149b-4512-a9fd-12cd0b65e370" (UID: "8383a9e3-149b-4512-a9fd-12cd0b65e370"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.027255 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume" (OuterVolumeSpecName: "config-volume") pod "8383a9e3-149b-4512-a9fd-12cd0b65e370" (UID: "8383a9e3-149b-4512-a9fd-12cd0b65e370"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.028577 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"] Feb 19 15:12:04 crc kubenswrapper[4810]: E0219 15:12:04.028868 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8383a9e3-149b-4512-a9fd-12cd0b65e370" containerName="collect-profiles" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.028890 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8383a9e3-149b-4512-a9fd-12cd0b65e370" containerName="collect-profiles" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.029027 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8383a9e3-149b-4512-a9fd-12cd0b65e370" containerName="collect-profiles" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.030001 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: W0219 15:12:04.046180 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6cb5092_5f01_4dd9_a940_804d88907744.slice/crio-958cccf7fb34847e1b79d6753f3c0b3ac89a306a0da6aa21a70c7c233870987b WatchSource:0}: Error finding container 958cccf7fb34847e1b79d6753f3c0b3ac89a306a0da6aa21a70c7c233870987b: Status 404 returned error can't find the container with id 958cccf7fb34847e1b79d6753f3c0b3ac89a306a0da6aa21a70c7c233870987b Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.055564 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.061425 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"] Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115414 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjkj\" (UniqueName: \"kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115500 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115537 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115578 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8383a9e3-149b-4512-a9fd-12cd0b65e370-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115592 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdm7r\" (UniqueName: \"kubernetes.io/projected/8383a9e3-149b-4512-a9fd-12cd0b65e370-kube-api-access-jdm7r\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.115600 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8383a9e3-149b-4512-a9fd-12cd0b65e370-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.216743 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjkj\" (UniqueName: \"kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.216843 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.217021 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.217916 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.218179 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.246436 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjkj\" (UniqueName: \"kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj\") pod \"redhat-operators-v87ss\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") " pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.320594 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"] Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.352156 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.378612 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.393404 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerID="50a196ba034be9702770fc3245e22281a78913a08bd60f8b507db74c90490792" exitCode=0 Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.393521 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerDied","Data":"50a196ba034be9702770fc3245e22281a78913a08bd60f8b507db74c90490792"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.393550 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerStarted","Data":"585fb3dde24254534020203ea59c98681809ba91054fff093daf819845736af6"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.399316 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" event={"ID":"8383a9e3-149b-4512-a9fd-12cd0b65e370","Type":"ContainerDied","Data":"28491df923b9f9d021efc8a5dd0472dd72e0e7e179d81a8ba24dae22fa983519"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.399360 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28491df923b9f9d021efc8a5dd0472dd72e0e7e179d81a8ba24dae22fa983519" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.399377 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.419630 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-h68pj" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.439399 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerID="ae8bc998d14d3fe5b46f631e4cd6f287af277e6334648d3823ae6a448b5c6c06" exitCode=0 Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.439459 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerDied","Data":"ae8bc998d14d3fe5b46f631e4cd6f287af277e6334648d3823ae6a448b5c6c06"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.439477 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerStarted","Data":"673ebc55d1a6b447cd1eff3908534a0236e7f9bed8a79855c901abb60e30a35e"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.466202 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerStarted","Data":"8f98c34b7848371876e8226d32a5b9a72fad3efa01f7370b3de8b257667df91e"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.489404 4810 generic.go:334] "Generic (PLEG): container finished" podID="9bf4faab-dab6-498d-bf73-7a740448c64b" containerID="24c3c0e67f6cb8d737307870275884530deb2d7f9328c8dbf07a497146ef635e" exitCode=0 Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.489846 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"9bf4faab-dab6-498d-bf73-7a740448c64b","Type":"ContainerDied","Data":"24c3c0e67f6cb8d737307870275884530deb2d7f9328c8dbf07a497146ef635e"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.513298 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" event={"ID":"a6cb5092-5f01-4dd9-a940-804d88907744","Type":"ContainerStarted","Data":"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.513368 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" event={"ID":"a6cb5092-5f01-4dd9-a940-804d88907744","Type":"ContainerStarted","Data":"958cccf7fb34847e1b79d6753f3c0b3ac89a306a0da6aa21a70c7c233870987b"} Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.515233 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.527143 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q6zwc" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.560098 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" podStartSLOduration=131.560078886 podStartE2EDuration="2m11.560078886s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:04.555451958 +0000 UTC m=+154.037482082" watchObservedRunningTime="2026-02-19 15:12:04.560078886 +0000 UTC m=+154.042109000" Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.802494 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"] Feb 19 15:12:04 crc 
kubenswrapper[4810]: W0219 15:12:04.853634 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee54de34_1c90_401d_8102_2cc1e4116661.slice/crio-e9e0d068483405f5413181ac3893890a639ed339e87615e73aeb8a60ceb19c52 WatchSource:0}: Error finding container e9e0d068483405f5413181ac3893890a639ed339e87615e73aeb8a60ceb19c52: Status 404 returned error can't find the container with id e9e0d068483405f5413181ac3893890a639ed339e87615e73aeb8a60ceb19c52 Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.920084 4810 patch_prober.go:28] interesting pod/router-default-5444994796-hvw7f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 15:12:04 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Feb 19 15:12:04 crc kubenswrapper[4810]: [+]process-running ok Feb 19 15:12:04 crc kubenswrapper[4810]: healthz check failed Feb 19 15:12:04 crc kubenswrapper[4810]: I0219 15:12:04.920175 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hvw7f" podUID="76ffdcba-57d6-4636-8373-f088926a716d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.532507 4810 generic.go:334] "Generic (PLEG): container finished" podID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerID="db4a068bb20ce6903e18758cc6f38e8bff29bd023c7f06ec3db18c434439f0c7" exitCode=0 Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.532721 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerDied","Data":"db4a068bb20ce6903e18758cc6f38e8bff29bd023c7f06ec3db18c434439f0c7"} Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.540039 4810 
generic.go:334] "Generic (PLEG): container finished" podID="ee54de34-1c90-401d-8102-2cc1e4116661" containerID="14425fa062e8aa4022f788165f140217518c2e7e2510e6d081b098762154d7a3" exitCode=0 Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.540954 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerDied","Data":"14425fa062e8aa4022f788165f140217518c2e7e2510e6d081b098762154d7a3"} Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.540979 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerStarted","Data":"e9e0d068483405f5413181ac3893890a639ed339e87615e73aeb8a60ceb19c52"} Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.942884 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:12:05 crc kubenswrapper[4810]: I0219 15:12:05.954515 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hvw7f" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.234777 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.372462 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir\") pod \"9bf4faab-dab6-498d-bf73-7a740448c64b\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.372543 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access\") pod \"9bf4faab-dab6-498d-bf73-7a740448c64b\" (UID: \"9bf4faab-dab6-498d-bf73-7a740448c64b\") " Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.372609 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bf4faab-dab6-498d-bf73-7a740448c64b" (UID: "9bf4faab-dab6-498d-bf73-7a740448c64b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.372870 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bf4faab-dab6-498d-bf73-7a740448c64b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.388534 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bf4faab-dab6-498d-bf73-7a740448c64b" (UID: "9bf4faab-dab6-498d-bf73-7a740448c64b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.474881 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bf4faab-dab6-498d-bf73-7a740448c64b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.562899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bf4faab-dab6-498d-bf73-7a740448c64b","Type":"ContainerDied","Data":"b81cbe17b0c85b33afea4e73bfe0c252ec94240f8fe691abb4c277e1104d95b3"} Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.562986 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 15:12:06 crc kubenswrapper[4810]: I0219 15:12:06.562998 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b81cbe17b0c85b33afea4e73bfe0c252ec94240f8fe691abb4c277e1104d95b3" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.023707 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 15:12:07 crc kubenswrapper[4810]: E0219 15:12:07.024072 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf4faab-dab6-498d-bf73-7a740448c64b" containerName="pruner" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.024094 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf4faab-dab6-498d-bf73-7a740448c64b" containerName="pruner" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.024242 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf4faab-dab6-498d-bf73-7a740448c64b" containerName="pruner" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.025202 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.027594 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.028069 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.041466 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.188354 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.188497 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.290279 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.290455 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.290565 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.330518 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.375795 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:07 crc kubenswrapper[4810]: I0219 15:12:07.830844 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 15:12:08 crc kubenswrapper[4810]: I0219 15:12:08.653476 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2dbc717-bc2f-458b-9fbd-0082602a0e6d","Type":"ContainerStarted","Data":"779d684e378a82c58f3e61eadda27c4746a790326de1db0560d46b865a873e75"} Feb 19 15:12:09 crc kubenswrapper[4810]: I0219 15:12:09.413061 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zgcrd" Feb 19 15:12:09 crc kubenswrapper[4810]: I0219 15:12:09.686064 4810 generic.go:334] "Generic (PLEG): container finished" podID="d2dbc717-bc2f-458b-9fbd-0082602a0e6d" containerID="df11bfb2abf6b4d6666c47f84367d2443ac85c6c12fe6f9c1ef0751d1d9da3bb" exitCode=0 Feb 19 15:12:09 crc kubenswrapper[4810]: I0219 15:12:09.686138 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2dbc717-bc2f-458b-9fbd-0082602a0e6d","Type":"ContainerDied","Data":"df11bfb2abf6b4d6666c47f84367d2443ac85c6c12fe6f9c1ef0751d1d9da3bb"} Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.107042 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.194906 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access\") pod \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.194976 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir\") pod \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\" (UID: \"d2dbc717-bc2f-458b-9fbd-0082602a0e6d\") " Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.195384 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d2dbc717-bc2f-458b-9fbd-0082602a0e6d" (UID: "d2dbc717-bc2f-458b-9fbd-0082602a0e6d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.215621 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d2dbc717-bc2f-458b-9fbd-0082602a0e6d" (UID: "d2dbc717-bc2f-458b-9fbd-0082602a0e6d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.301372 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.301420 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2dbc717-bc2f-458b-9fbd-0082602a0e6d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.731824 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d2dbc717-bc2f-458b-9fbd-0082602a0e6d","Type":"ContainerDied","Data":"779d684e378a82c58f3e61eadda27c4746a790326de1db0560d46b865a873e75"} Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.732230 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779d684e378a82c58f3e61eadda27c4746a790326de1db0560d46b865a873e75" Feb 19 15:12:11 crc kubenswrapper[4810]: I0219 15:12:11.731890 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.586465 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.586985 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.586465 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.587147 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.650299 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4hddt"
Feb 19 15:12:13 crc kubenswrapper[4810]: I0219 15:12:13.659696 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4hddt"
Feb 19 15:12:15 crc kubenswrapper[4810]: I0219 15:12:15.573801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9"
Feb 19 15:12:15 crc kubenswrapper[4810]: I0219 15:12:15.596472 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b72d3f7a-e418-4a21-af73-6a43ce3358c1-metrics-certs\") pod \"network-metrics-daemon-2x9v9\" (UID: \"b72d3f7a-e418-4a21-af73-6a43ce3358c1\") " pod="openshift-multus/network-metrics-daemon-2x9v9"
Feb 19 15:12:15 crc kubenswrapper[4810]: I0219 15:12:15.764810 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2x9v9"
Feb 19 15:12:16 crc kubenswrapper[4810]: I0219 15:12:16.864689 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"]
Feb 19 15:12:16 crc kubenswrapper[4810]: I0219 15:12:16.865854 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager" containerID="cri-o://dc986c76de8632d646d80420e0f140b9a738f5647ff326454ca42eca27431ca5" gracePeriod=30
Feb 19 15:12:16 crc kubenswrapper[4810]: I0219 15:12:16.903947 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"]
Feb 19 15:12:16 crc kubenswrapper[4810]: I0219 15:12:16.906658 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager" containerID="cri-o://b43634a16e9cdfb66105b0eff470452a3406e103e0c90e02477b5f9de0072e03" gracePeriod=30
Feb 19 15:12:18 crc kubenswrapper[4810]: I0219 15:12:18.813173 4810 generic.go:334] "Generic (PLEG): container finished" podID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerID="dc986c76de8632d646d80420e0f140b9a738f5647ff326454ca42eca27431ca5" exitCode=0
Feb 19 15:12:18 crc kubenswrapper[4810]: I0219 15:12:18.813300 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" event={"ID":"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6","Type":"ContainerDied","Data":"dc986c76de8632d646d80420e0f140b9a738f5647ff326454ca42eca27431ca5"}
Feb 19 15:12:19 crc kubenswrapper[4810]: I0219 15:12:19.537787 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 15:12:19 crc kubenswrapper[4810]: I0219 15:12:19.537881 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.358672 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.547249 4810 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n8zth container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.547771 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.587541 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.587605 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.587662 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.587686 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.587774 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-dkppn"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.588566 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.588655 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.589008 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"248f3396a274a7c02cca57cafc68effaca4124e54f5008022c7525d6c545f088"} pod="openshift-console/downloads-7954f5f757-dkppn" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.589225 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" containerID="cri-o://248f3396a274a7c02cca57cafc68effaca4124e54f5008022c7525d6c545f088" gracePeriod=2
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.646756 4810 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sl5p9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 19 15:12:23 crc kubenswrapper[4810]: I0219 15:12:23.646833 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 19 15:12:28 crc kubenswrapper[4810]: I0219 15:12:28.896963 4810 generic.go:334] "Generic (PLEG): container finished" podID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerID="b43634a16e9cdfb66105b0eff470452a3406e103e0c90e02477b5f9de0072e03" exitCode=0
Feb 19 15:12:28 crc kubenswrapper[4810]: I0219 15:12:28.897193 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" event={"ID":"ad43df2c-4944-45e2-919f-0c297f4092d4","Type":"ContainerDied","Data":"b43634a16e9cdfb66105b0eff470452a3406e103e0c90e02477b5f9de0072e03"}
Feb 19 15:12:29 crc kubenswrapper[4810]: I0219 15:12:29.908381 4810 generic.go:334] "Generic (PLEG): container finished" podID="7a29951a-027e-49b4-a7ea-a8e363942414" containerID="248f3396a274a7c02cca57cafc68effaca4124e54f5008022c7525d6c545f088" exitCode=0
Feb 19 15:12:29 crc kubenswrapper[4810]: I0219 15:12:29.908710 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dkppn" event={"ID":"7a29951a-027e-49b4-a7ea-a8e363942414","Type":"ContainerDied","Data":"248f3396a274a7c02cca57cafc68effaca4124e54f5008022c7525d6c545f088"}
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.704262 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.716035 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.744903 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"]
Feb 19 15:12:31 crc kubenswrapper[4810]: E0219 15:12:31.745283 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745305 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: E0219 15:12:31.745364 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745379 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: E0219 15:12:31.745401 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dbc717-bc2f-458b-9fbd-0082602a0e6d" containerName="pruner"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745415 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dbc717-bc2f-458b-9fbd-0082602a0e6d" containerName="pruner"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745606 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2dbc717-bc2f-458b-9fbd-0082602a0e6d" containerName="pruner"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745630 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" containerName="route-controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.745655 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" containerName="controller-manager"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.746371 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.747722 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l478x\" (UniqueName: \"kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x\") pod \"ad43df2c-4944-45e2-919f-0c297f4092d4\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.747778 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config\") pod \"ad43df2c-4944-45e2-919f-0c297f4092d4\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.747957 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles\") pod \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748114 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert\") pod \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748194 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert\") pod \"ad43df2c-4944-45e2-919f-0c297f4092d4\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748234 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca\") pod \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748300 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k7k5\" (UniqueName: \"kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5\") pod \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748371 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca\") pod \"ad43df2c-4944-45e2-919f-0c297f4092d4\" (UID: \"ad43df2c-4944-45e2-919f-0c297f4092d4\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.748462 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config\") pod \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\" (UID: \"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6\") "
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.750036 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config" (OuterVolumeSpecName: "config") pod "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" (UID: "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.756516 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" (UID: "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.757154 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config" (OuterVolumeSpecName: "config") pod "ad43df2c-4944-45e2-919f-0c297f4092d4" (UID: "ad43df2c-4944-45e2-919f-0c297f4092d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.758103 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ad43df2c-4944-45e2-919f-0c297f4092d4" (UID: "ad43df2c-4944-45e2-919f-0c297f4092d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.758133 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca" (OuterVolumeSpecName: "client-ca") pod "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" (UID: "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.759348 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"]
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.760920 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x" (OuterVolumeSpecName: "kube-api-access-l478x") pod "ad43df2c-4944-45e2-919f-0c297f4092d4" (UID: "ad43df2c-4944-45e2-919f-0c297f4092d4"). InnerVolumeSpecName "kube-api-access-l478x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.762045 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ad43df2c-4944-45e2-919f-0c297f4092d4" (UID: "ad43df2c-4944-45e2-919f-0c297f4092d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.779357 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" (UID: "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.786160 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5" (OuterVolumeSpecName: "kube-api-access-8k7k5") pod "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" (UID: "6f0a78b8-77b3-47dd-9dbe-e578b0374cf6"). InnerVolumeSpecName "kube-api-access-8k7k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849704 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849772 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvn9b\" (UniqueName: \"kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849820 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849910 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849957 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l478x\" (UniqueName: \"kubernetes.io/projected/ad43df2c-4944-45e2-919f-0c297f4092d4-kube-api-access-l478x\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849969 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849980 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.849989 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.850030 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad43df2c-4944-45e2-919f-0c297f4092d4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.850039 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.850048 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k7k5\" (UniqueName: \"kubernetes.io/projected/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-kube-api-access-8k7k5\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.850056 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad43df2c-4944-45e2-919f-0c297f4092d4-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.850064 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.924417 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9" event={"ID":"6f0a78b8-77b3-47dd-9dbe-e578b0374cf6","Type":"ContainerDied","Data":"918504ecdc8f526f00360e8dd63cb3df985da70843c55db6ce3d06ae23d89251"}
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.924460 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sl5p9"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.924501 4810 scope.go:117] "RemoveContainer" containerID="dc986c76de8632d646d80420e0f140b9a738f5647ff326454ca42eca27431ca5"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.930154 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth" event={"ID":"ad43df2c-4944-45e2-919f-0c297f4092d4","Type":"ContainerDied","Data":"8973ecaad02b7aeec4f31a47d961686dc669236fd8c026776f4e494af608cf1b"}
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.930313 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.950857 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvn9b\" (UniqueName: \"kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.950915 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.950981 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.951029 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.952116 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.952367 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.961179 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.961830 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"]
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.972577 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sl5p9"]
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.972853 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvn9b\" (UniqueName: \"kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b\") pod \"route-controller-manager-58bf9f9fdf-sfcvl\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.976432 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"]
Feb 19 15:12:31 crc kubenswrapper[4810]: I0219 15:12:31.979516 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n8zth"]
Feb 19 15:12:32 crc kubenswrapper[4810]: I0219 15:12:32.112545 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.447634 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0a78b8-77b3-47dd-9dbe-e578b0374cf6" path="/var/lib/kubelet/pods/6f0a78b8-77b3-47dd-9dbe-e578b0374cf6/volumes"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.448615 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad43df2c-4944-45e2-919f-0c297f4092d4" path="/var/lib/kubelet/pods/ad43df2c-4944-45e2-919f-0c297f4092d4/volumes"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.586341 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.586399 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.815540 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"]
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.816594 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.820878 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.821148 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.821501 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.821853 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.823978 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.825659 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.834271 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"]
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.841715 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.874907 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwvr\" (UniqueName: \"kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.874983 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.875064 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.875116 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.875150 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.919257 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2nn8b"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.976084 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.976158 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.976185 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.976254 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwvr\" (UniqueName: \"kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.976281 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.978614 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.978720 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.979299 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.985775 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8"
Feb 19 15:12:33 crc kubenswrapper[4810]: I0219 15:12:33.998076 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"kube-api-access-ptwvr\" (UniqueName: \"kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr\") pod \"controller-manager-bf8444975-mbpl8\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" Feb 19 15:12:34 crc kubenswrapper[4810]: I0219 15:12:34.154531 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" Feb 19 15:12:36 crc kubenswrapper[4810]: I0219 15:12:36.843770 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"] Feb 19 15:12:36 crc kubenswrapper[4810]: I0219 15:12:36.935364 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"] Feb 19 15:12:39 crc kubenswrapper[4810]: I0219 15:12:39.532408 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 15:12:39 crc kubenswrapper[4810]: E0219 15:12:39.648591 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 15:12:39 crc kubenswrapper[4810]: E0219 15:12:39.648783 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqz6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d5ks5_openshift-marketplace(9a3d6b1f-2011-4f7f-bea0-1d303007fe41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 15:12:39 crc kubenswrapper[4810]: E0219 15:12:39.649957 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d5ks5" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" Feb 19 15:12:39 crc 
kubenswrapper[4810]: E0219 15:12:39.681869 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 15:12:39 crc kubenswrapper[4810]: E0219 15:12:39.682083 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xzxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-x8sn2_openshift-marketplace(cc8ce195-1fe1-4684-8172-e710b3552fb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 15:12:39 crc kubenswrapper[4810]: E0219 15:12:39.683809 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x8sn2" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:40.999654 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x8sn2" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.000754 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d5ks5" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.019585 4810 scope.go:117] "RemoveContainer" containerID="b43634a16e9cdfb66105b0eff470452a3406e103e0c90e02477b5f9de0072e03" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.062647 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.063141 4810 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4nld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dfqg8_openshift-marketplace(5c654206-f2d0-4b40-9df0-577dbf27e5e4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.064384 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dfqg8" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.148465 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.148661 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,Ap
pArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ptbh9_openshift-marketplace(7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.149899 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ptbh9" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.211573 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.211712 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thvr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-blpmq_openshift-marketplace(4127fef2-ef2b-4cc4-967d-d52dac26f314): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 15:12:41 crc kubenswrapper[4810]: E0219 15:12:41.217186 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-blpmq" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" Feb 19 15:12:41 crc 
kubenswrapper[4810]: I0219 15:12:41.405281 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"] Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.412300 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2x9v9"] Feb 19 15:12:41 crc kubenswrapper[4810]: W0219 15:12:41.430873 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb72d3f7a_e418_4a21_af73_6a43ce3358c1.slice/crio-c7be3375c01757ff909a664a4f5366321aa5a0fc9a5b4c30ad042d60df9dfd32 WatchSource:0}: Error finding container c7be3375c01757ff909a664a4f5366321aa5a0fc9a5b4c30ad042d60df9dfd32: Status 404 returned error can't find the container with id c7be3375c01757ff909a664a4f5366321aa5a0fc9a5b4c30ad042d60df9dfd32 Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.539137 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"] Feb 19 15:12:41 crc kubenswrapper[4810]: W0219 15:12:41.547252 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a9911b7_9775_45a3_8eba_2418c6a8c7da.slice/crio-ec358f68ac8aafcdf69584f6ce0aea9148f028a0a525c1f72e78fe0953401995 WatchSource:0}: Error finding container ec358f68ac8aafcdf69584f6ce0aea9148f028a0a525c1f72e78fe0953401995: Status 404 returned error can't find the container with id ec358f68ac8aafcdf69584f6ce0aea9148f028a0a525c1f72e78fe0953401995 Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.796993 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.798104 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.800992 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.801077 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.810932 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.920955 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:41 crc kubenswrapper[4810]: I0219 15:12:41.921043 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.016770 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" event={"ID":"3a9911b7-9775-45a3-8eba-2418c6a8c7da","Type":"ContainerStarted","Data":"05732258b05d3841e954e41368cf1974cbd08c6655bd488dd5b6669cff2eff10"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.016821 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" 
event={"ID":"3a9911b7-9775-45a3-8eba-2418c6a8c7da","Type":"ContainerStarted","Data":"ec358f68ac8aafcdf69584f6ce0aea9148f028a0a525c1f72e78fe0953401995"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.019204 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" event={"ID":"f3507810-2d89-4c4c-bb78-f824d27a6b33","Type":"ContainerStarted","Data":"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.019272 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" event={"ID":"f3507810-2d89-4c4c-bb78-f824d27a6b33","Type":"ContainerStarted","Data":"ae4ec9dd615a7b98a2515fa91901e518b04ab1ddd1e2dc3384fefaeb39a9e580"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.019643 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" containerName="controller-manager" containerID="cri-o://a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47" gracePeriod=30 Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.019918 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.021335 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" event={"ID":"b72d3f7a-e418-4a21-af73-6a43ce3358c1","Type":"ContainerStarted","Data":"c7be3375c01757ff909a664a4f5366321aa5a0fc9a5b4c30ad042d60df9dfd32"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.021799 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir\") 
pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.021856 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.022195 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.024442 4810 generic.go:334] "Generic (PLEG): container finished" podID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerID="2338631bd769d72dd75e6311ec37172a35cf219ae1ae08bee9e394a35d599110" exitCode=0 Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.024512 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerDied","Data":"2338631bd769d72dd75e6311ec37172a35cf219ae1ae08bee9e394a35d599110"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.030608 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerStarted","Data":"678210fe6fd1c3abf47690dcbbfba0fc503dc30b786177c25e50c7cce621be3d"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.038286 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" 
event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerStarted","Data":"d8d3ab6d086deece4cdefec9736b01ba8f86997b96e710debe90df076af13fd1"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.040365 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dkppn" event={"ID":"7a29951a-027e-49b4-a7ea-a8e363942414","Type":"ContainerStarted","Data":"a924c306a0172ac4f69e59d668d26991f48bbb4e8ddce89d336a0e0fb6dfb9e5"} Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.041371 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.041455 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.041508 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.052594 4810 patch_prober.go:28] interesting pod/controller-manager-bf8444975-mbpl8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:59546->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.052679 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:59546->10.217.0.55:8443: read: connection reset by peer" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.054225 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: E0219 15:12:42.056853 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ptbh9" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" Feb 19 15:12:42 crc kubenswrapper[4810]: E0219 15:12:42.056926 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dfqg8" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" Feb 19 15:12:42 crc kubenswrapper[4810]: E0219 15:12:42.058204 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-blpmq" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.072694 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" podStartSLOduration=26.072673906 
podStartE2EDuration="26.072673906s" podCreationTimestamp="2026-02-19 15:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:42.054110575 +0000 UTC m=+191.536140699" watchObservedRunningTime="2026-02-19 15:12:42.072673906 +0000 UTC m=+191.554704030" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.121760 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:42 crc kubenswrapper[4810]: I0219 15:12:42.341021 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.051450 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.061767 4810 generic.go:334] "Generic (PLEG): container finished" podID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerID="678210fe6fd1c3abf47690dcbbfba0fc503dc30b786177c25e50c7cce621be3d" exitCode=0 Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.061853 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerDied","Data":"678210fe6fd1c3abf47690dcbbfba0fc503dc30b786177c25e50c7cce621be3d"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.063237 4810 generic.go:334] "Generic (PLEG): container finished" podID="f3507810-2d89-4c4c-bb78-f824d27a6b33" containerID="a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47" exitCode=0 Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.063303 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.063349 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" event={"ID":"f3507810-2d89-4c4c-bb78-f824d27a6b33","Type":"ContainerDied","Data":"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.063402 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf8444975-mbpl8" event={"ID":"f3507810-2d89-4c4c-bb78-f824d27a6b33","Type":"ContainerDied","Data":"ae4ec9dd615a7b98a2515fa91901e518b04ab1ddd1e2dc3384fefaeb39a9e580"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.063431 4810 scope.go:117] "RemoveContainer" containerID="a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.065181 4810 generic.go:334] "Generic (PLEG): container finished" podID="ee54de34-1c90-401d-8102-2cc1e4116661" containerID="d8d3ab6d086deece4cdefec9736b01ba8f86997b96e710debe90df076af13fd1" exitCode=0 Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.065210 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerDied","Data":"d8d3ab6d086deece4cdefec9736b01ba8f86997b96e710debe90df076af13fd1"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.067653 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2x9v9" event={"ID":"b72d3f7a-e418-4a21-af73-6a43ce3358c1","Type":"ContainerStarted","Data":"d44f15231af8cb21f7ccee9cd51d7426e3bca1e6b8b1f3e479c05ee5769791bb"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.067689 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-2x9v9" event={"ID":"b72d3f7a-e418-4a21-af73-6a43ce3358c1","Type":"ContainerStarted","Data":"938fef66a55244a280a168ee18a5bb33fcb5ba9de655a49d6c5ffcf64eef4843"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086229 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0ed7553-c229-4444-9e9d-53a16d271385","Type":"ContainerStarted","Data":"cbf5f2fe685e0232708247bcf761dd389dbfe07942605aa730596520aea1f4e5"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086272 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0ed7553-c229-4444-9e9d-53a16d271385","Type":"ContainerStarted","Data":"ca55f8aae7ec39babf30ac9cbecf6ad25f43591f6457b091b32a87ba31a0bf92"} Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086386 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" podUID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" containerName="route-controller-manager" containerID="cri-o://05732258b05d3841e954e41368cf1974cbd08c6655bd488dd5b6669cff2eff10" gracePeriod=30 Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086814 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086847 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.086881 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" 
podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.094625 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.096090 4810 scope.go:117] "RemoveContainer" containerID="a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47" Feb 19 15:12:43 crc kubenswrapper[4810]: E0219 15:12:43.096607 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47\": container with ID starting with a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47 not found: ID does not exist" containerID="a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.096648 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47"} err="failed to get container status \"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47\": rpc error: code = NotFound desc = could not find container \"a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47\": container with ID starting with a9f9e44ebaf4a998457e0db44c51717fbe64d4e0dd456c466ed47a6c88be4c47 not found: ID does not exist" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.106749 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:43 crc kubenswrapper[4810]: E0219 15:12:43.107076 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" containerName="controller-manager" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.107096 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" containerName="controller-manager" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.107243 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" containerName="controller-manager" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.107674 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.110084 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.152167 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" podStartSLOduration=27.152150362 podStartE2EDuration="27.152150362s" podCreationTimestamp="2026-02-19 15:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:43.148297096 +0000 UTC m=+192.630327220" watchObservedRunningTime="2026-02-19 15:12:43.152150362 +0000 UTC m=+192.634180486" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237172 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config\") pod \"f3507810-2d89-4c4c-bb78-f824d27a6b33\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237233 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles\") pod \"f3507810-2d89-4c4c-bb78-f824d27a6b33\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237272 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca\") pod \"f3507810-2d89-4c4c-bb78-f824d27a6b33\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237346 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert\") pod \"f3507810-2d89-4c4c-bb78-f824d27a6b33\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237406 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwvr\" (UniqueName: \"kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr\") pod \"f3507810-2d89-4c4c-bb78-f824d27a6b33\" (UID: \"f3507810-2d89-4c4c-bb78-f824d27a6b33\") " Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237586 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237630 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: 
\"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237753 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237780 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpstf\" (UniqueName: \"kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.237824 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.238966 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config" (OuterVolumeSpecName: "config") pod "f3507810-2d89-4c4c-bb78-f824d27a6b33" (UID: "f3507810-2d89-4c4c-bb78-f824d27a6b33"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.240265 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca" (OuterVolumeSpecName: "client-ca") pod "f3507810-2d89-4c4c-bb78-f824d27a6b33" (UID: "f3507810-2d89-4c4c-bb78-f824d27a6b33"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.240998 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f3507810-2d89-4c4c-bb78-f824d27a6b33" (UID: "f3507810-2d89-4c4c-bb78-f824d27a6b33"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.248117 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f3507810-2d89-4c4c-bb78-f824d27a6b33" (UID: "f3507810-2d89-4c4c-bb78-f824d27a6b33"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.248976 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr" (OuterVolumeSpecName: "kube-api-access-ptwvr") pod "f3507810-2d89-4c4c-bb78-f824d27a6b33" (UID: "f3507810-2d89-4c4c-bb78-f824d27a6b33"). InnerVolumeSpecName "kube-api-access-ptwvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339567 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339638 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339706 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpstf\" (UniqueName: \"kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339775 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert\") pod 
\"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339823 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3507810-2d89-4c4c-bb78-f824d27a6b33-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339839 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwvr\" (UniqueName: \"kubernetes.io/projected/f3507810-2d89-4c4c-bb78-f824d27a6b33-kube-api-access-ptwvr\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339851 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339860 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.339870 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3507810-2d89-4c4c-bb78-f824d27a6b33-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.341715 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.341837 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.343481 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.350621 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.357804 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpstf\" (UniqueName: \"kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf\") pod \"controller-manager-5b7784947b-gg4f2\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.399584 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"] Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.402731 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bf8444975-mbpl8"] Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.425353 4810 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.464473 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3507810-2d89-4c4c-bb78-f824d27a6b33" path="/var/lib/kubelet/pods/f3507810-2d89-4c4c-bb78-f824d27a6b33/volumes" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.585811 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.585877 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.586034 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.586097 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:43 crc kubenswrapper[4810]: I0219 15:12:43.655591 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:43 crc 
kubenswrapper[4810]: W0219 15:12:43.665849 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5a4ac5_f403_4492_93a9_eca271fc69cf.slice/crio-f33a645c1b493c26e6d3ed1478c77f60dcd1bd0b429730e5b33e0b76bf136c78 WatchSource:0}: Error finding container f33a645c1b493c26e6d3ed1478c77f60dcd1bd0b429730e5b33e0b76bf136c78: Status 404 returned error can't find the container with id f33a645c1b493c26e6d3ed1478c77f60dcd1bd0b429730e5b33e0b76bf136c78 Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.097699 4810 generic.go:334] "Generic (PLEG): container finished" podID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" containerID="05732258b05d3841e954e41368cf1974cbd08c6655bd488dd5b6669cff2eff10" exitCode=0 Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.097826 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" event={"ID":"3a9911b7-9775-45a3-8eba-2418c6a8c7da","Type":"ContainerDied","Data":"05732258b05d3841e954e41368cf1974cbd08c6655bd488dd5b6669cff2eff10"} Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.101665 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" event={"ID":"0b5a4ac5-f403-4492-93a9-eca271fc69cf","Type":"ContainerStarted","Data":"f33a645c1b493c26e6d3ed1478c77f60dcd1bd0b429730e5b33e0b76bf136c78"} Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.103918 4810 generic.go:334] "Generic (PLEG): container finished" podID="b0ed7553-c229-4444-9e9d-53a16d271385" containerID="cbf5f2fe685e0232708247bcf761dd389dbfe07942605aa730596520aea1f4e5" exitCode=0 Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.103966 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"b0ed7553-c229-4444-9e9d-53a16d271385","Type":"ContainerDied","Data":"cbf5f2fe685e0232708247bcf761dd389dbfe07942605aa730596520aea1f4e5"} Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.105075 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-dkppn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.105372 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dkppn" podUID="7a29951a-027e-49b4-a7ea-a8e363942414" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.122796 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2x9v9" podStartSLOduration=171.122775384 podStartE2EDuration="2m51.122775384s" podCreationTimestamp="2026-02-19 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:44.119207056 +0000 UTC m=+193.601237200" watchObservedRunningTime="2026-02-19 15:12:44.122775384 +0000 UTC m=+193.604805518" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.323585 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.455550 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert\") pod \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.455621 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config\") pod \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.455653 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvn9b\" (UniqueName: \"kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b\") pod \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.455729 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca\") pod \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\" (UID: \"3a9911b7-9775-45a3-8eba-2418c6a8c7da\") " Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.456457 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a9911b7-9775-45a3-8eba-2418c6a8c7da" (UID: "3a9911b7-9775-45a3-8eba-2418c6a8c7da"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.456463 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config" (OuterVolumeSpecName: "config") pod "3a9911b7-9775-45a3-8eba-2418c6a8c7da" (UID: "3a9911b7-9775-45a3-8eba-2418c6a8c7da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.462901 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a9911b7-9775-45a3-8eba-2418c6a8c7da" (UID: "3a9911b7-9775-45a3-8eba-2418c6a8c7da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.463003 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b" (OuterVolumeSpecName: "kube-api-access-dvn9b") pod "3a9911b7-9775-45a3-8eba-2418c6a8c7da" (UID: "3a9911b7-9775-45a3-8eba-2418c6a8c7da"). InnerVolumeSpecName "kube-api-access-dvn9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.556927 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a9911b7-9775-45a3-8eba-2418c6a8c7da-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.556970 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.556983 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvn9b\" (UniqueName: \"kubernetes.io/projected/3a9911b7-9775-45a3-8eba-2418c6a8c7da-kube-api-access-dvn9b\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:44 crc kubenswrapper[4810]: I0219 15:12:44.556993 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a9911b7-9775-45a3-8eba-2418c6a8c7da-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.113714 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" event={"ID":"0b5a4ac5-f403-4492-93a9-eca271fc69cf","Type":"ContainerStarted","Data":"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da"} Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.114176 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.117800 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.120580 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl" event={"ID":"3a9911b7-9775-45a3-8eba-2418c6a8c7da","Type":"ContainerDied","Data":"ec358f68ac8aafcdf69584f6ce0aea9148f028a0a525c1f72e78fe0953401995"} Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.120638 4810 scope.go:117] "RemoveContainer" containerID="05732258b05d3841e954e41368cf1974cbd08c6655bd488dd5b6669cff2eff10" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.125652 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.141915 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" podStartSLOduration=9.141891669 podStartE2EDuration="9.141891669s" podCreationTimestamp="2026-02-19 15:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:45.132294985 +0000 UTC m=+194.614325119" watchObservedRunningTime="2026-02-19 15:12:45.141891669 +0000 UTC m=+194.623921793" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.181248 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"] Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.185002 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-sfcvl"] Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.438364 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.449953 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" path="/var/lib/kubelet/pods/3a9911b7-9775-45a3-8eba-2418c6a8c7da/volumes" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.568045 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access\") pod \"b0ed7553-c229-4444-9e9d-53a16d271385\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.568203 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir\") pod \"b0ed7553-c229-4444-9e9d-53a16d271385\" (UID: \"b0ed7553-c229-4444-9e9d-53a16d271385\") " Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.568517 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0ed7553-c229-4444-9e9d-53a16d271385" (UID: "b0ed7553-c229-4444-9e9d-53a16d271385"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.579110 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0ed7553-c229-4444-9e9d-53a16d271385" (UID: "b0ed7553-c229-4444-9e9d-53a16d271385"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.670272 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0ed7553-c229-4444-9e9d-53a16d271385-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.670319 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ed7553-c229-4444-9e9d-53a16d271385-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.822426 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:45 crc kubenswrapper[4810]: E0219 15:12:45.823226 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ed7553-c229-4444-9e9d-53a16d271385" containerName="pruner" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.823241 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ed7553-c229-4444-9e9d-53a16d271385" containerName="pruner" Feb 19 15:12:45 crc kubenswrapper[4810]: E0219 15:12:45.823253 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" containerName="route-controller-manager" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.823261 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" containerName="route-controller-manager" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.823423 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9911b7-9775-45a3-8eba-2418c6a8c7da" containerName="route-controller-manager" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.823437 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ed7553-c229-4444-9e9d-53a16d271385" containerName="pruner" Feb 19 
15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.824116 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.826738 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.826743 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.833257 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.836230 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.836232 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.836376 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.836814 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.973912 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " 
pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.973950 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.974459 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:45 crc kubenswrapper[4810]: I0219 15:12:45.974703 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2f2\" (UniqueName: \"kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.076121 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.076199 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kb2f2\" (UniqueName: \"kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.076241 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.076266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.077209 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.078011 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " 
pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.080930 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.096715 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2f2\" (UniqueName: \"kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2\") pod \"route-controller-manager-57ffd7999b-9kf5c\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.126355 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerStarted","Data":"077475bcb0b82366e9846158038585fb73fc53528d8e624159315394b6489745"} Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.130729 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b0ed7553-c229-4444-9e9d-53a16d271385","Type":"ContainerDied","Data":"ca55f8aae7ec39babf30ac9cbecf6ad25f43591f6457b091b32a87ba31a0bf92"} Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.130907 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca55f8aae7ec39babf30ac9cbecf6ad25f43591f6457b091b32a87ba31a0bf92" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.130968 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 15:12:46 crc kubenswrapper[4810]: I0219 15:12:46.138772 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:47 crc kubenswrapper[4810]: I0219 15:12:47.162417 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rk4vw" podStartSLOduration=3.787131675 podStartE2EDuration="47.162392236s" podCreationTimestamp="2026-02-19 15:12:00 +0000 UTC" firstStartedPulling="2026-02-19 15:12:02.244116614 +0000 UTC m=+151.726146738" lastFinishedPulling="2026-02-19 15:12:45.619377175 +0000 UTC m=+195.101407299" observedRunningTime="2026-02-19 15:12:47.16139175 +0000 UTC m=+196.643421874" watchObservedRunningTime="2026-02-19 15:12:47.162392236 +0000 UTC m=+196.644422360" Feb 19 15:12:47 crc kubenswrapper[4810]: I0219 15:12:47.224054 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:47 crc kubenswrapper[4810]: W0219 15:12:47.231458 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4526d27_e1ba_4a55_b017_2f7f003521b4.slice/crio-0fb16d39f37e006c3ff91fbde25cbf07205195d0c458afc9aea3b63ed377d307 WatchSource:0}: Error finding container 0fb16d39f37e006c3ff91fbde25cbf07205195d0c458afc9aea3b63ed377d307: Status 404 returned error can't find the container with id 0fb16d39f37e006c3ff91fbde25cbf07205195d0c458afc9aea3b63ed377d307 Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.143991 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerStarted","Data":"e76c00c840c072923d95803a179729bb3992d5d640b8cfc79a009aafae36db32"} Feb 19 
15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.146677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" event={"ID":"f4526d27-e1ba-4a55-b017-2f7f003521b4","Type":"ContainerStarted","Data":"0fb16d39f37e006c3ff91fbde25cbf07205195d0c458afc9aea3b63ed377d307"} Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.161039 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v87ss" podStartSLOduration=3.7826496069999997 podStartE2EDuration="45.161022307s" podCreationTimestamp="2026-02-19 15:12:03 +0000 UTC" firstStartedPulling="2026-02-19 15:12:05.541928805 +0000 UTC m=+155.023958919" lastFinishedPulling="2026-02-19 15:12:46.920301495 +0000 UTC m=+196.402331619" observedRunningTime="2026-02-19 15:12:48.160300699 +0000 UTC m=+197.642330823" watchObservedRunningTime="2026-02-19 15:12:48.161022307 +0000 UTC m=+197.643052431" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.206015 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.206681 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.212113 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.212555 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.213491 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.309944 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.309998 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.310051 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.411500 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.411577 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.411655 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.411659 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.411815 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.434670 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access\") pod \"installer-9-crc\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.526796 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:12:48 crc kubenswrapper[4810]: I0219 15:12:48.957362 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 15:12:48 crc kubenswrapper[4810]: W0219 15:12:48.967456 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3788d870_2889_4190_9675_e4da44f69a71.slice/crio-edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967 WatchSource:0}: Error finding container edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967: Status 404 returned error can't find the container with id edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967 Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.152684 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3788d870-2889-4190-9675-e4da44f69a71","Type":"ContainerStarted","Data":"edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967"} Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.155220 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerStarted","Data":"f7e09ee181e03814c9bffa4e23b2c1f95c88075e138b0f5ea2eafb7e14efd0cc"} Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.157236 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" event={"ID":"f4526d27-e1ba-4a55-b017-2f7f003521b4","Type":"ContainerStarted","Data":"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5"} Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.171640 4810 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gp8sg" podStartSLOduration=3.381662295 podStartE2EDuration="46.171626385s" podCreationTimestamp="2026-02-19 15:12:03 +0000 UTC" firstStartedPulling="2026-02-19 15:12:05.537626557 +0000 UTC m=+155.019656681" lastFinishedPulling="2026-02-19 15:12:48.327590647 +0000 UTC m=+197.809620771" observedRunningTime="2026-02-19 15:12:49.168519595 +0000 UTC m=+198.650549709" watchObservedRunningTime="2026-02-19 15:12:49.171626385 +0000 UTC m=+198.653656509" Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.188979 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" podStartSLOduration=13.18895462 podStartE2EDuration="13.18895462s" podCreationTimestamp="2026-02-19 15:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:49.18467298 +0000 UTC m=+198.666703104" watchObservedRunningTime="2026-02-19 15:12:49.18895462 +0000 UTC m=+198.670984744" Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.537541 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:12:49 crc kubenswrapper[4810]: I0219 15:12:49.537599 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.164337 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3788d870-2889-4190-9675-e4da44f69a71","Type":"ContainerStarted","Data":"df933f73e8a108c6e7ee4ac92776be1551ecd1fa292247fa01da1f7c1d90d486"} Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.164639 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.174015 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.179051 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.179038082 podStartE2EDuration="2.179038082s" podCreationTimestamp="2026-02-19 15:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:50.177492212 +0000 UTC m=+199.659522326" watchObservedRunningTime="2026-02-19 15:12:50.179038082 +0000 UTC m=+199.661068206" Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.869743 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:50 crc kubenswrapper[4810]: I0219 15:12:50.869803 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:51 crc kubenswrapper[4810]: I0219 15:12:51.436269 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:51 crc kubenswrapper[4810]: I0219 15:12:51.475546 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:12:53 crc 
kubenswrapper[4810]: I0219 15:12:53.596186 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dkppn" Feb 19 15:12:53 crc kubenswrapper[4810]: I0219 15:12:53.979705 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:53 crc kubenswrapper[4810]: I0219 15:12:53.981116 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:12:54 crc kubenswrapper[4810]: I0219 15:12:54.353135 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:54 crc kubenswrapper[4810]: I0219 15:12:54.353466 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v87ss" Feb 19 15:12:55 crc kubenswrapper[4810]: I0219 15:12:55.054039 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gp8sg" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="registry-server" probeResult="failure" output=< Feb 19 15:12:55 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:12:55 crc kubenswrapper[4810]: > Feb 19 15:12:55 crc kubenswrapper[4810]: I0219 15:12:55.392315 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v87ss" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="registry-server" probeResult="failure" output=< Feb 19 15:12:55 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:12:55 crc kubenswrapper[4810]: > Feb 19 15:12:56 crc kubenswrapper[4810]: I0219 15:12:56.843932 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:56 crc kubenswrapper[4810]: I0219 15:12:56.844396 
4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" podUID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" containerName="controller-manager" containerID="cri-o://42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da" gracePeriod=30 Feb 19 15:12:56 crc kubenswrapper[4810]: I0219 15:12:56.859412 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:56 crc kubenswrapper[4810]: I0219 15:12:56.859654 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" podUID="f4526d27-e1ba-4a55-b017-2f7f003521b4" containerName="route-controller-manager" containerID="cri-o://7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5" gracePeriod=30 Feb 19 15:12:57 crc kubenswrapper[4810]: I0219 15:12:57.968075 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.010900 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"] Feb 19 15:12:58 crc kubenswrapper[4810]: E0219 15:12:58.011164 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4526d27-e1ba-4a55-b017-2f7f003521b4" containerName="route-controller-manager" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.012522 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4526d27-e1ba-4a55-b017-2f7f003521b4" containerName="route-controller-manager" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.012764 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4526d27-e1ba-4a55-b017-2f7f003521b4" containerName="route-controller-manager" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.013364 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.017548 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"] Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.106012 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.150983 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config\") pod \"f4526d27-e1ba-4a55-b017-2f7f003521b4\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.151169 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2f2\" (UniqueName: \"kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2\") pod \"f4526d27-e1ba-4a55-b017-2f7f003521b4\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.151290 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca\") pod \"f4526d27-e1ba-4a55-b017-2f7f003521b4\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.151310 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert\") pod \"f4526d27-e1ba-4a55-b017-2f7f003521b4\" (UID: \"f4526d27-e1ba-4a55-b017-2f7f003521b4\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152019 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4526d27-e1ba-4a55-b017-2f7f003521b4" (UID: "f4526d27-e1ba-4a55-b017-2f7f003521b4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152036 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config" (OuterVolumeSpecName: "config") pod "f4526d27-e1ba-4a55-b017-2f7f003521b4" (UID: "f4526d27-e1ba-4a55-b017-2f7f003521b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152473 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152527 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152580 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152608 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nxmxw\" (UniqueName: \"kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152655 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.152665 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4526d27-e1ba-4a55-b017-2f7f003521b4-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.158470 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4526d27-e1ba-4a55-b017-2f7f003521b4" (UID: "f4526d27-e1ba-4a55-b017-2f7f003521b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.161013 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2" (OuterVolumeSpecName: "kube-api-access-kb2f2") pod "f4526d27-e1ba-4a55-b017-2f7f003521b4" (UID: "f4526d27-e1ba-4a55-b017-2f7f003521b4"). InnerVolumeSpecName "kube-api-access-kb2f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.211967 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerID="2ba8cb7edb1b0d7ff70efe1c621d14ae3cd3fc1499b0b814053d764332922921" exitCode=0 Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.212058 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerDied","Data":"2ba8cb7edb1b0d7ff70efe1c621d14ae3cd3fc1499b0b814053d764332922921"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.214687 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerStarted","Data":"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.216918 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerStarted","Data":"b6b451b278f70ee6f85bbefa846a37e02c57266a0a01e80c05241319be84555e"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.224964 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerID="471901a772079c16e7d5c328b745070f868fa529208773cf4e67ab79b2945769" exitCode=0 Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.225044 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerDied","Data":"471901a772079c16e7d5c328b745070f868fa529208773cf4e67ab79b2945769"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.227215 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="f4526d27-e1ba-4a55-b017-2f7f003521b4" containerID="7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5" exitCode=0 Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.227272 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" event={"ID":"f4526d27-e1ba-4a55-b017-2f7f003521b4","Type":"ContainerDied","Data":"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.227292 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" event={"ID":"f4526d27-e1ba-4a55-b017-2f7f003521b4","Type":"ContainerDied","Data":"0fb16d39f37e006c3ff91fbde25cbf07205195d0c458afc9aea3b63ed377d307"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.227307 4810 scope.go:117] "RemoveContainer" containerID="7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.227431 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.229486 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerStarted","Data":"cf8a20a9712326d7f7917d432dc449c8f1126f425b48ed7c1328c76b1ca7b19c"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.232243 4810 generic.go:334] "Generic (PLEG): container finished" podID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" containerID="42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da" exitCode=0 Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.232266 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" event={"ID":"0b5a4ac5-f403-4492-93a9-eca271fc69cf","Type":"ContainerDied","Data":"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.232280 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" event={"ID":"0b5a4ac5-f403-4492-93a9-eca271fc69cf","Type":"ContainerDied","Data":"f33a645c1b493c26e6d3ed1478c77f60dcd1bd0b429730e5b33e0b76bf136c78"} Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.232362 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7784947b-gg4f2" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.254048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca\") pod \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.254146 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles\") pod \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.254187 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpstf\" (UniqueName: \"kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf\") pod \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.254222 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config\") pod \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.254961 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert\") pod \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\" (UID: \"0b5a4ac5-f403-4492-93a9-eca271fc69cf\") " Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255185 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255219 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255267 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255299 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxmxw\" (UniqueName: \"kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255371 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4526d27-e1ba-4a55-b017-2f7f003521b4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255384 4810 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kb2f2\" (UniqueName: \"kubernetes.io/projected/f4526d27-e1ba-4a55-b017-2f7f003521b4-kube-api-access-kb2f2\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.255656 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b5a4ac5-f403-4492-93a9-eca271fc69cf" (UID: "0b5a4ac5-f403-4492-93a9-eca271fc69cf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.256038 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b5a4ac5-f403-4492-93a9-eca271fc69cf" (UID: "0b5a4ac5-f403-4492-93a9-eca271fc69cf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.256170 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config" (OuterVolumeSpecName: "config") pod "0b5a4ac5-f403-4492-93a9-eca271fc69cf" (UID: "0b5a4ac5-f403-4492-93a9-eca271fc69cf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.257268 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.259675 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.259806 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b5a4ac5-f403-4492-93a9-eca271fc69cf" (UID: "0b5a4ac5-f403-4492-93a9-eca271fc69cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.260691 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf" (OuterVolumeSpecName: "kube-api-access-zpstf") pod "0b5a4ac5-f403-4492-93a9-eca271fc69cf" (UID: "0b5a4ac5-f403-4492-93a9-eca271fc69cf"). InnerVolumeSpecName "kube-api-access-zpstf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.267972 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.286532 4810 scope.go:117] "RemoveContainer" containerID="7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5" Feb 19 15:12:58 crc kubenswrapper[4810]: E0219 15:12:58.287529 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5\": container with ID starting with 7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5 not found: ID does not exist" containerID="7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.287638 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5"} err="failed to get container status \"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5\": rpc error: code = NotFound desc = could not find container \"7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5\": container with ID starting with 7f9ff27afa865c538f30b8e556f3e9f6b3f4147d3617810bab36ea8dda821ac5 not found: ID does not exist" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.287695 4810 scope.go:117] "RemoveContainer" containerID="42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.291734 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxmxw\" (UniqueName: \"kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw\") pod \"route-controller-manager-6fb4659b76-cmjpw\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.301814 4810 scope.go:117] "RemoveContainer" containerID="42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da" Feb 19 15:12:58 crc kubenswrapper[4810]: E0219 15:12:58.302259 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da\": container with ID starting with 42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da not found: ID does not exist" containerID="42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.302292 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da"} err="failed to get container status \"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da\": rpc error: code = NotFound desc = could not find container \"42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da\": container with ID starting with 42468dd1baefcb60fe30e73c95955a3bb2eaf741c06d1f46bf0941fa5d8419da not found: ID does not exist" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.331489 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.358710 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.359369 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57ffd7999b-9kf5c"] Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.360492 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5a4ac5-f403-4492-93a9-eca271fc69cf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.360540 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.360554 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.360571 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpstf\" (UniqueName: \"kubernetes.io/projected/0b5a4ac5-f403-4492-93a9-eca271fc69cf-kube-api-access-zpstf\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.360584 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5a4ac5-f403-4492-93a9-eca271fc69cf-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.563472 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.566151 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b7784947b-gg4f2"] Feb 19 15:12:58 crc kubenswrapper[4810]: I0219 15:12:58.589093 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"] Feb 19 15:12:58 crc kubenswrapper[4810]: W0219 15:12:58.598406 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d474cf7_0ff3_43d0_88e8_19100a658851.slice/crio-61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b WatchSource:0}: Error finding container 61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b: Status 404 returned error can't find the container with id 61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.248030 4810 generic.go:334] "Generic (PLEG): container finished" podID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerID="b6b451b278f70ee6f85bbefa846a37e02c57266a0a01e80c05241319be84555e" exitCode=0 Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.248088 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerDied","Data":"b6b451b278f70ee6f85bbefa846a37e02c57266a0a01e80c05241319be84555e"} Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.253976 4810 generic.go:334] "Generic (PLEG): container finished" podID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerID="9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233" exitCode=0 Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.254048 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerDied","Data":"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233"} Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.269109 4810 generic.go:334] "Generic (PLEG): container finished" podID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerID="cf8a20a9712326d7f7917d432dc449c8f1126f425b48ed7c1328c76b1ca7b19c" exitCode=0 Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.269229 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerDied","Data":"cf8a20a9712326d7f7917d432dc449c8f1126f425b48ed7c1328c76b1ca7b19c"} Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.281550 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" event={"ID":"9d474cf7-0ff3-43d0-88e8-19100a658851","Type":"ContainerStarted","Data":"a2b82759b901660a79c47e74ce7b041d5164f2bc3f90bc7fb84fbbfb09e3a3e7"} Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.281714 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" event={"ID":"9d474cf7-0ff3-43d0-88e8-19100a658851","Type":"ContainerStarted","Data":"61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b"} Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.281839 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.349475 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" podStartSLOduration=3.349440675 podStartE2EDuration="3.349440675s" 
podCreationTimestamp="2026-02-19 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:12:59.347280289 +0000 UTC m=+208.829310423" watchObservedRunningTime="2026-02-19 15:12:59.349440675 +0000 UTC m=+208.831470839" Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.450196 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" path="/var/lib/kubelet/pods/0b5a4ac5-f403-4492-93a9-eca271fc69cf/volumes" Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.451296 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4526d27-e1ba-4a55-b017-2f7f003521b4" path="/var/lib/kubelet/pods/f4526d27-e1ba-4a55-b017-2f7f003521b4/volumes" Feb 19 15:12:59 crc kubenswrapper[4810]: I0219 15:12:59.542386 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.829477 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"] Feb 19 15:13:00 crc kubenswrapper[4810]: E0219 15:13:00.830041 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" containerName="controller-manager" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.830057 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" containerName="controller-manager" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.830222 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5a4ac5-f403-4492-93a9-eca271fc69cf" containerName="controller-manager" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.830792 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.833824 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.834275 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.834354 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.838947 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.839435 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.840193 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.840953 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"] Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.844955 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.995159 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " 
pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.995207 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.995235 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.995262 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:00 crc kubenswrapper[4810]: I0219 15:13:00.995284 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgv7\" (UniqueName: \"kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.096412 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.096477 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.096500 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.096529 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.096557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgv7\" (UniqueName: \"kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 
15:13:01.098353 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.099208 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.099613 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.103561 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.112260 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgv7\" (UniqueName: \"kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7\") pod \"controller-manager-698c8bc94d-lz9st\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") " pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:01 crc kubenswrapper[4810]: I0219 15:13:01.150395 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:02 crc kubenswrapper[4810]: I0219 15:13:02.844157 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"]
Feb 19 15:13:03 crc kubenswrapper[4810]: I0219 15:13:03.325286 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" event={"ID":"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1","Type":"ContainerStarted","Data":"5a6aca7188bb15f6a1934c0cbf78cc5b48de2dae39ca1f408904e66c1d41c473"}
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.021989 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gp8sg"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.069043 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gp8sg"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.333730 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" event={"ID":"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1","Type":"ContainerStarted","Data":"26464e252c8755e254a428e223c59bad6858b683b17cb2dde5fe0926737087fb"}
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.334287 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.337573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerStarted","Data":"32a771c22d7bf5ec3610bad3dd5fe81ac2ad5a407f7d5f0293d46b10d47c380d"}
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.340946 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.363530 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" podStartSLOduration=8.363459545 podStartE2EDuration="8.363459545s" podCreationTimestamp="2026-02-19 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:13:04.358090917 +0000 UTC m=+213.840121061" watchObservedRunningTime="2026-02-19 15:13:04.363459545 +0000 UTC m=+213.845489719"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.404042 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v87ss"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.407366 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dfqg8" podStartSLOduration=4.494835201 podStartE2EDuration="1m2.407345563s" podCreationTimestamp="2026-02-19 15:12:02 +0000 UTC" firstStartedPulling="2026-02-19 15:12:04.471998743 +0000 UTC m=+153.954028867" lastFinishedPulling="2026-02-19 15:13:02.384509065 +0000 UTC m=+211.866539229" observedRunningTime="2026-02-19 15:13:04.405369582 +0000 UTC m=+213.887399706" watchObservedRunningTime="2026-02-19 15:13:04.407345563 +0000 UTC m=+213.889375697"
Feb 19 15:13:04 crc kubenswrapper[4810]: I0219 15:13:04.450469 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v87ss"
Feb 19 15:13:06 crc kubenswrapper[4810]: I0219 15:13:06.872271 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"]
Feb 19 15:13:06 crc kubenswrapper[4810]: I0219 15:13:06.872905 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v87ss" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="registry-server" containerID="cri-o://e76c00c840c072923d95803a179729bb3992d5d640b8cfc79a009aafae36db32" gracePeriod=2
Feb 19 15:13:07 crc kubenswrapper[4810]: I0219 15:13:07.385335 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerStarted","Data":"3c4b22d47de57aea7d57602a01f5293afd50a4c1301de10fd9d0be527b9184bd"}
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.408882 4810 generic.go:334] "Generic (PLEG): container finished" podID="ee54de34-1c90-401d-8102-2cc1e4116661" containerID="e76c00c840c072923d95803a179729bb3992d5d640b8cfc79a009aafae36db32" exitCode=0
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.408957 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerDied","Data":"e76c00c840c072923d95803a179729bb3992d5d640b8cfc79a009aafae36db32"}
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.430396 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ptbh9" podStartSLOduration=4.887551294 podStartE2EDuration="1m6.430307787s" podCreationTimestamp="2026-02-19 15:12:02 +0000 UTC" firstStartedPulling="2026-02-19 15:12:04.406403768 +0000 UTC m=+153.888433882" lastFinishedPulling="2026-02-19 15:13:05.949160241 +0000 UTC m=+215.431190375" observedRunningTime="2026-02-19 15:13:08.428880241 +0000 UTC m=+217.910910375" watchObservedRunningTime="2026-02-19 15:13:08.430307787 +0000 UTC m=+217.912337911"
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.521273 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v87ss"
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.704592 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjkj\" (UniqueName: \"kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj\") pod \"ee54de34-1c90-401d-8102-2cc1e4116661\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") "
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.705301 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities\") pod \"ee54de34-1c90-401d-8102-2cc1e4116661\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") "
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.705410 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content\") pod \"ee54de34-1c90-401d-8102-2cc1e4116661\" (UID: \"ee54de34-1c90-401d-8102-2cc1e4116661\") "
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.706145 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities" (OuterVolumeSpecName: "utilities") pod "ee54de34-1c90-401d-8102-2cc1e4116661" (UID: "ee54de34-1c90-401d-8102-2cc1e4116661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.713109 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj" (OuterVolumeSpecName: "kube-api-access-fbjkj") pod "ee54de34-1c90-401d-8102-2cc1e4116661" (UID: "ee54de34-1c90-401d-8102-2cc1e4116661"). InnerVolumeSpecName "kube-api-access-fbjkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.808225 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbjkj\" (UniqueName: \"kubernetes.io/projected/ee54de34-1c90-401d-8102-2cc1e4116661-kube-api-access-fbjkj\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.808276 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.839380 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee54de34-1c90-401d-8102-2cc1e4116661" (UID: "ee54de34-1c90-401d-8102-2cc1e4116661"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:13:08 crc kubenswrapper[4810]: I0219 15:13:08.909360 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee54de34-1c90-401d-8102-2cc1e4116661-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.435215 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v87ss" event={"ID":"ee54de34-1c90-401d-8102-2cc1e4116661","Type":"ContainerDied","Data":"e9e0d068483405f5413181ac3893890a639ed339e87615e73aeb8a60ceb19c52"}
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.435289 4810 scope.go:117] "RemoveContainer" containerID="e76c00c840c072923d95803a179729bb3992d5d640b8cfc79a009aafae36db32"
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.435443 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v87ss"
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.474314 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"]
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.480379 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v87ss"]
Feb 19 15:13:09 crc kubenswrapper[4810]: I0219 15:13:09.547160 4810 scope.go:117] "RemoveContainer" containerID="d8d3ab6d086deece4cdefec9736b01ba8f86997b96e710debe90df076af13fd1"
Feb 19 15:13:10 crc kubenswrapper[4810]: I0219 15:13:10.449516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerStarted","Data":"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110"}
Feb 19 15:13:10 crc kubenswrapper[4810]: I0219 15:13:10.475509 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8sn2" podStartSLOduration=5.383213754 podStartE2EDuration="1m10.47547664s" podCreationTimestamp="2026-02-19 15:12:00 +0000 UTC" firstStartedPulling="2026-02-19 15:12:03.323521368 +0000 UTC m=+152.805551492" lastFinishedPulling="2026-02-19 15:13:08.415784254 +0000 UTC m=+217.897814378" observedRunningTime="2026-02-19 15:13:10.472823451 +0000 UTC m=+219.954853575" watchObservedRunningTime="2026-02-19 15:13:10.47547664 +0000 UTC m=+219.957506764"
Feb 19 15:13:10 crc kubenswrapper[4810]: I0219 15:13:10.644972 4810 scope.go:117] "RemoveContainer" containerID="14425fa062e8aa4022f788165f140217518c2e7e2510e6d081b098762154d7a3"
Feb 19 15:13:11 crc kubenswrapper[4810]: I0219 15:13:11.354762 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8sn2"
Feb 19 15:13:11 crc kubenswrapper[4810]: I0219 15:13:11.354854 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8sn2"
Feb 19 15:13:11 crc kubenswrapper[4810]: I0219 15:13:11.450255 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" path="/var/lib/kubelet/pods/ee54de34-1c90-401d-8102-2cc1e4116661/volumes"
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.413544 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-x8sn2" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="registry-server" probeResult="failure" output=<
Feb 19 15:13:12 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s
Feb 19 15:13:12 crc kubenswrapper[4810]: >
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.473540 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerStarted","Data":"6cc469c93c48a9ab5d144e46acaa99a2b78d56e0e2fbdf938f36668cd8a62128"}
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.477107 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerStarted","Data":"6d399395ed59b272e5ce80489e65062cb3d89e0840588f5165efb67b6a223708"}
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.514765 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-blpmq" podStartSLOduration=3.8235967 podStartE2EDuration="1m12.514731001s" podCreationTimestamp="2026-02-19 15:12:00 +0000 UTC" firstStartedPulling="2026-02-19 15:12:03.341070351 +0000 UTC m=+152.823100465" lastFinishedPulling="2026-02-19 15:13:12.032204642 +0000 UTC m=+221.514234766" observedRunningTime="2026-02-19 15:13:12.511218451 +0000 UTC m=+221.993248585" watchObservedRunningTime="2026-02-19 15:13:12.514731001 +0000 UTC m=+221.996761125"
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.536990 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5ks5" podStartSLOduration=3.881865864 podStartE2EDuration="1m12.536966772s" podCreationTimestamp="2026-02-19 15:12:00 +0000 UTC" firstStartedPulling="2026-02-19 15:12:03.383647052 +0000 UTC m=+152.865677176" lastFinishedPulling="2026-02-19 15:13:12.03874794 +0000 UTC m=+221.520778084" observedRunningTime="2026-02-19 15:13:12.536441209 +0000 UTC m=+222.018471333" watchObservedRunningTime="2026-02-19 15:13:12.536966772 +0000 UTC m=+222.018996896"
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.691158 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r74mv"]
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.727722 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ptbh9"
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.728202 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ptbh9"
Feb 19 15:13:12 crc kubenswrapper[4810]: I0219 15:13:12.793122 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ptbh9"
Feb 19 15:13:13 crc kubenswrapper[4810]: I0219 15:13:13.221483 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:13 crc kubenswrapper[4810]: I0219 15:13:13.221545 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:13 crc kubenswrapper[4810]: I0219 15:13:13.275529 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:13 crc kubenswrapper[4810]: I0219 15:13:13.528591 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ptbh9"
Feb 19 15:13:13 crc kubenswrapper[4810]: I0219 15:13:13.533098 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:16 crc kubenswrapper[4810]: I0219 15:13:16.878134 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"]
Feb 19 15:13:16 crc kubenswrapper[4810]: I0219 15:13:16.878632 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dfqg8" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="registry-server" containerID="cri-o://32a771c22d7bf5ec3610bad3dd5fe81ac2ad5a407f7d5f0293d46b10d47c380d" gracePeriod=2
Feb 19 15:13:16 crc kubenswrapper[4810]: I0219 15:13:16.913187 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"]
Feb 19 15:13:16 crc kubenswrapper[4810]: I0219 15:13:16.913549 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" podUID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" containerName="controller-manager" containerID="cri-o://26464e252c8755e254a428e223c59bad6858b683b17cb2dde5fe0926737087fb" gracePeriod=30
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.004879 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"]
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.005165 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" podUID="9d474cf7-0ff3-43d0-88e8-19100a658851" containerName="route-controller-manager" containerID="cri-o://a2b82759b901660a79c47e74ce7b041d5164f2bc3f90bc7fb84fbbfb09e3a3e7" gracePeriod=30
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.512434 4810 generic.go:334] "Generic (PLEG): container finished" podID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" containerID="26464e252c8755e254a428e223c59bad6858b683b17cb2dde5fe0926737087fb" exitCode=0
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.512534 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" event={"ID":"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1","Type":"ContainerDied","Data":"26464e252c8755e254a428e223c59bad6858b683b17cb2dde5fe0926737087fb"}
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.515655 4810 generic.go:334] "Generic (PLEG): container finished" podID="9d474cf7-0ff3-43d0-88e8-19100a658851" containerID="a2b82759b901660a79c47e74ce7b041d5164f2bc3f90bc7fb84fbbfb09e3a3e7" exitCode=0
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.515780 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" event={"ID":"9d474cf7-0ff3-43d0-88e8-19100a658851","Type":"ContainerDied","Data":"a2b82759b901660a79c47e74ce7b041d5164f2bc3f90bc7fb84fbbfb09e3a3e7"}
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.515828 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw" event={"ID":"9d474cf7-0ff3-43d0-88e8-19100a658851","Type":"ContainerDied","Data":"61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b"}
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.515851 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61cb0e99accc7f3521da0dd967157522b71ffbbbce8920140bb606b51134702b"
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.518597 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerID="32a771c22d7bf5ec3610bad3dd5fe81ac2ad5a407f7d5f0293d46b10d47c380d" exitCode=0
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.518648 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerDied","Data":"32a771c22d7bf5ec3610bad3dd5fe81ac2ad5a407f7d5f0293d46b10d47c380d"}
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.518678 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfqg8" event={"ID":"5c654206-f2d0-4b40-9df0-577dbf27e5e4","Type":"ContainerDied","Data":"673ebc55d1a6b447cd1eff3908534a0236e7f9bed8a79855c901abb60e30a35e"}
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.518689 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="673ebc55d1a6b447cd1eff3908534a0236e7f9bed8a79855c901abb60e30a35e"
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.548984 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.557591 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562395 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4nld\" (UniqueName: \"kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld\") pod \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562452 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content\") pod \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562501 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities\") pod \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\" (UID: \"5c654206-f2d0-4b40-9df0-577dbf27e5e4\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562518 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxmxw\" (UniqueName: \"kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw\") pod \"9d474cf7-0ff3-43d0-88e8-19100a658851\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562546 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert\") pod \"9d474cf7-0ff3-43d0-88e8-19100a658851\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562566 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca\") pod \"9d474cf7-0ff3-43d0-88e8-19100a658851\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.562586 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config\") pod \"9d474cf7-0ff3-43d0-88e8-19100a658851\" (UID: \"9d474cf7-0ff3-43d0-88e8-19100a658851\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.563471 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d474cf7-0ff3-43d0-88e8-19100a658851" (UID: "9d474cf7-0ff3-43d0-88e8-19100a658851"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.563499 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config" (OuterVolumeSpecName: "config") pod "9d474cf7-0ff3-43d0-88e8-19100a658851" (UID: "9d474cf7-0ff3-43d0-88e8-19100a658851"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.563500 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities" (OuterVolumeSpecName: "utilities") pod "5c654206-f2d0-4b40-9df0-577dbf27e5e4" (UID: "5c654206-f2d0-4b40-9df0-577dbf27e5e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.571419 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld" (OuterVolumeSpecName: "kube-api-access-r4nld") pod "5c654206-f2d0-4b40-9df0-577dbf27e5e4" (UID: "5c654206-f2d0-4b40-9df0-577dbf27e5e4"). InnerVolumeSpecName "kube-api-access-r4nld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.575931 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d474cf7-0ff3-43d0-88e8-19100a658851" (UID: "9d474cf7-0ff3-43d0-88e8-19100a658851"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.594778 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw" (OuterVolumeSpecName: "kube-api-access-nxmxw") pod "9d474cf7-0ff3-43d0-88e8-19100a658851" (UID: "9d474cf7-0ff3-43d0-88e8-19100a658851"). InnerVolumeSpecName "kube-api-access-nxmxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.603954 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c654206-f2d0-4b40-9df0-577dbf27e5e4" (UID: "5c654206-f2d0-4b40-9df0-577dbf27e5e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.662477 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663313 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663347 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxmxw\" (UniqueName: \"kubernetes.io/projected/9d474cf7-0ff3-43d0-88e8-19100a658851-kube-api-access-nxmxw\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663358 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d474cf7-0ff3-43d0-88e8-19100a658851-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663366 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663374 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d474cf7-0ff3-43d0-88e8-19100a658851-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663383 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4nld\" (UniqueName: \"kubernetes.io/projected/5c654206-f2d0-4b40-9df0-577dbf27e5e4-kube-api-access-r4nld\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.663390 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c654206-f2d0-4b40-9df0-577dbf27e5e4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.764797 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca\") pod \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.764854 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgv7\" (UniqueName: \"kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7\") pod \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.764880 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles\") pod \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.764926 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert\") pod \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.764959 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config\") pod \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\" (UID: \"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1\") "
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.765672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca" (OuterVolumeSpecName: "client-ca") pod "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" (UID: "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.765765 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config" (OuterVolumeSpecName: "config") pod "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" (UID: "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.765874 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" (UID: "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.769797 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" (UID: "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.769830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7" (OuterVolumeSpecName: "kube-api-access-bfgv7") pod "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" (UID: "9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1"). InnerVolumeSpecName "kube-api-access-bfgv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.866433 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.866497 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgv7\" (UniqueName: \"kubernetes.io/projected/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-kube-api-access-bfgv7\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.866514 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.866524 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:17 crc kubenswrapper[4810]: I0219 15:13:17.866535 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.528718 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st" event={"ID":"9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1","Type":"ContainerDied","Data":"5a6aca7188bb15f6a1934c0cbf78cc5b48de2dae39ca1f408904e66c1d41c473"}
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.528819 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-698c8bc94d-lz9st"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.528828 4810 scope.go:117] "RemoveContainer" containerID="26464e252c8755e254a428e223c59bad6858b683b17cb2dde5fe0926737087fb"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.528751 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfqg8"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.530469 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.575107 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.586600 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfqg8"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.594865 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.601412 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-698c8bc94d-lz9st"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.607911 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.613086 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4659b76-cmjpw"]
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.844010 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67"]
Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.846703 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="registry-server"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846751 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="registry-server"
Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.846782 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="extract-content"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846798 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="extract-content"
Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.846816 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" containerName="controller-manager"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846834 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" containerName="controller-manager"
Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.846864 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="extract-content"
Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846883 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="extract-content"
Feb 19 15:13:18 crc 
kubenswrapper[4810]: E0219 15:13:18.846918 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="extract-utilities" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846934 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="extract-utilities" Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.846956 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d474cf7-0ff3-43d0-88e8-19100a658851" containerName="route-controller-manager" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.846972 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d474cf7-0ff3-43d0-88e8-19100a658851" containerName="route-controller-manager" Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.847006 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="extract-utilities" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.847021 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="extract-utilities" Feb 19 15:13:18 crc kubenswrapper[4810]: E0219 15:13:18.847039 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="registry-server" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.847055 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="registry-server" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.847280 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee54de34-1c90-401d-8102-2cc1e4116661" containerName="registry-server" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.847321 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d474cf7-0ff3-43d0-88e8-19100a658851" 
containerName="route-controller-manager" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.851237 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" containerName="controller-manager" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.851265 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" containerName="registry-server" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.852181 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6d66b55f-52rdb"] Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.852494 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.853491 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.855159 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.856027 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6d66b55f-52rdb"] Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.856203 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.856361 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.858246 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 15:13:18 
crc kubenswrapper[4810]: I0219 15:13:18.858295 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.858590 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.858774 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.858910 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.859085 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.859612 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.861980 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.864042 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.868947 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67"] Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.869872 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982259 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw4x\" (UniqueName: \"kubernetes.io/projected/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-kube-api-access-8xw4x\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982364 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-proxy-ca-bundles\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982526 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-config\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982707 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581f2a9a-402b-45f5-a6e1-572fe1ccd601-serving-cert\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982755 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzqr\" (UniqueName: \"kubernetes.io/projected/581f2a9a-402b-45f5-a6e1-572fe1ccd601-kube-api-access-llzqr\") pod 
\"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982791 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-config\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982823 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-serving-cert\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982853 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-client-ca\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:18 crc kubenswrapper[4810]: I0219 15:13:18.982900 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-client-ca\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083742 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xw4x\" (UniqueName: \"kubernetes.io/projected/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-kube-api-access-8xw4x\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083800 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-proxy-ca-bundles\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083858 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-config\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083918 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581f2a9a-402b-45f5-a6e1-572fe1ccd601-serving-cert\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083940 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-config\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " 
pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083963 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llzqr\" (UniqueName: \"kubernetes.io/projected/581f2a9a-402b-45f5-a6e1-572fe1ccd601-kube-api-access-llzqr\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.083988 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-serving-cert\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.084009 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-client-ca\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.084030 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-client-ca\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.085424 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-client-ca\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.085530 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-config\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.085646 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-proxy-ca-bundles\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.086150 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-client-ca\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.086309 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581f2a9a-402b-45f5-a6e1-572fe1ccd601-config\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.088123 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-serving-cert\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.088341 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581f2a9a-402b-45f5-a6e1-572fe1ccd601-serving-cert\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.107967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xw4x\" (UniqueName: \"kubernetes.io/projected/af9230b1-4c3c-46d1-9513-6b7b0234f7f9-kube-api-access-8xw4x\") pod \"route-controller-manager-5d79b8558f-lfs67\" (UID: \"af9230b1-4c3c-46d1-9513-6b7b0234f7f9\") " pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.111033 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llzqr\" (UniqueName: \"kubernetes.io/projected/581f2a9a-402b-45f5-a6e1-572fe1ccd601-kube-api-access-llzqr\") pod \"controller-manager-d6d66b55f-52rdb\" (UID: \"581f2a9a-402b-45f5-a6e1-572fe1ccd601\") " pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.202755 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.224861 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.447364 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c654206-f2d0-4b40-9df0-577dbf27e5e4" path="/var/lib/kubelet/pods/5c654206-f2d0-4b40-9df0-577dbf27e5e4/volumes" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.448731 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d474cf7-0ff3-43d0-88e8-19100a658851" path="/var/lib/kubelet/pods/9d474cf7-0ff3-43d0-88e8-19100a658851/volumes" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.449554 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1" path="/var/lib/kubelet/pods/9fab1a62-cdc3-4a2c-ab2d-dbd8f1f097f1/volumes" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.538148 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.538232 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.538344 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.539230 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.539301 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651" gracePeriod=600 Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.720267 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67"] Feb 19 15:13:19 crc kubenswrapper[4810]: I0219 15:13:19.724517 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6d66b55f-52rdb"] Feb 19 15:13:19 crc kubenswrapper[4810]: W0219 15:13:19.736144 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf9230b1_4c3c_46d1_9513_6b7b0234f7f9.slice/crio-10e92dccf46416517ce6b2a1fb6279117989063ab1a528462ce3408ea940dd8f WatchSource:0}: Error finding container 10e92dccf46416517ce6b2a1fb6279117989063ab1a528462ce3408ea940dd8f: Status 404 returned error can't find the container with id 10e92dccf46416517ce6b2a1fb6279117989063ab1a528462ce3408ea940dd8f Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.543952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" event={"ID":"581f2a9a-402b-45f5-a6e1-572fe1ccd601","Type":"ContainerStarted","Data":"666d5b4bfd03ae8416fff90e79d71d92d336b908d952756de1f2eb7e95e48118"} Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.545704 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" event={"ID":"581f2a9a-402b-45f5-a6e1-572fe1ccd601","Type":"ContainerStarted","Data":"482664dd86c2e2adfb359f5beb6bef16aa9a095fa74e70cf41969a4b378d1e2c"} Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.546865 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" event={"ID":"af9230b1-4c3c-46d1-9513-6b7b0234f7f9","Type":"ContainerStarted","Data":"c011eb13462d97403c9e0803b312a77eecf8646ceae2b042da0cb35dcbe87d08"} Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.546913 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.546933 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.546944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" event={"ID":"af9230b1-4c3c-46d1-9513-6b7b0234f7f9","Type":"ContainerStarted","Data":"10e92dccf46416517ce6b2a1fb6279117989063ab1a528462ce3408ea940dd8f"} Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.553140 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651" exitCode=0 Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.553193 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651"} Feb 19 
15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.553240 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658"} Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.563277 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.569762 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.571152 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6d66b55f-52rdb" podStartSLOduration=4.571125458 podStartE2EDuration="4.571125458s" podCreationTimestamp="2026-02-19 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:13:20.565922825 +0000 UTC m=+230.047952969" watchObservedRunningTime="2026-02-19 15:13:20.571125458 +0000 UTC m=+230.053155582" Feb 19 15:13:20 crc kubenswrapper[4810]: I0219 15:13:20.609064 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d79b8558f-lfs67" podStartSLOduration=3.609037712 podStartE2EDuration="3.609037712s" podCreationTimestamp="2026-02-19 15:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:13:20.607621056 +0000 UTC m=+230.089651180" watchObservedRunningTime="2026-02-19 15:13:20.609037712 +0000 UTC m=+230.091067836" Feb 19 15:13:21 crc kubenswrapper[4810]: 
I0219 15:13:21.020348 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.020577 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.059094 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.136214 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.136272 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.176583 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.412355 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.458407 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.613682 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:13:21 crc kubenswrapper[4810]: I0219 15:13:21.631192 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.078574 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.079015 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8sn2" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="registry-server" containerID="cri-o://266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110" gracePeriod=2 Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.535460 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.576606 4810 generic.go:334] "Generic (PLEG): container finished" podID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerID="266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110" exitCode=0 Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.576729 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8sn2" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.576722 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerDied","Data":"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110"} Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.577196 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8sn2" event={"ID":"cc8ce195-1fe1-4684-8172-e710b3552fb5","Type":"ContainerDied","Data":"b765fd7f2fdc89df238064ee08672584bc5ff6e6cd24e9a1fe430dad87064297"} Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.577225 4810 scope.go:117] "RemoveContainer" containerID="266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.598675 4810 scope.go:117] "RemoveContainer" containerID="9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.617719 4810 scope.go:117] "RemoveContainer" containerID="5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.640526 4810 scope.go:117] "RemoveContainer" containerID="266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110" Feb 19 15:13:23 crc kubenswrapper[4810]: E0219 15:13:23.641166 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110\": container with ID starting with 266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110 not found: ID does not exist" containerID="266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.641202 4810 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110"} err="failed to get container status \"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110\": rpc error: code = NotFound desc = could not find container \"266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110\": container with ID starting with 266b7a0eecc4ea263050674e9b0fa34134a650122976b06935949729514b8110 not found: ID does not exist" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.641226 4810 scope.go:117] "RemoveContainer" containerID="9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233" Feb 19 15:13:23 crc kubenswrapper[4810]: E0219 15:13:23.641719 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233\": container with ID starting with 9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233 not found: ID does not exist" containerID="9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.641746 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233"} err="failed to get container status \"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233\": rpc error: code = NotFound desc = could not find container \"9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233\": container with ID starting with 9ab517ec22370626097c33f80a892ee93279a0d966337802ec729d7271a4e233 not found: ID does not exist" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.641761 4810 scope.go:117] "RemoveContainer" containerID="5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284" Feb 19 15:13:23 crc kubenswrapper[4810]: E0219 15:13:23.642104 4810 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284\": container with ID starting with 5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284 not found: ID does not exist" containerID="5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.642124 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284"} err="failed to get container status \"5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284\": rpc error: code = NotFound desc = could not find container \"5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284\": container with ID starting with 5c922635ea487f2a0968c5eace1227e4dcace0fce70a7ca4e66511c946968284 not found: ID does not exist" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.651835 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content\") pod \"cc8ce195-1fe1-4684-8172-e710b3552fb5\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.652090 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities\") pod \"cc8ce195-1fe1-4684-8172-e710b3552fb5\" (UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.652161 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzxn\" (UniqueName: \"kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn\") pod \"cc8ce195-1fe1-4684-8172-e710b3552fb5\" 
(UID: \"cc8ce195-1fe1-4684-8172-e710b3552fb5\") " Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.653334 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities" (OuterVolumeSpecName: "utilities") pod "cc8ce195-1fe1-4684-8172-e710b3552fb5" (UID: "cc8ce195-1fe1-4684-8172-e710b3552fb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.661292 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn" (OuterVolumeSpecName: "kube-api-access-4xzxn") pod "cc8ce195-1fe1-4684-8172-e710b3552fb5" (UID: "cc8ce195-1fe1-4684-8172-e710b3552fb5"). InnerVolumeSpecName "kube-api-access-4xzxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.719553 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc8ce195-1fe1-4684-8172-e710b3552fb5" (UID: "cc8ce195-1fe1-4684-8172-e710b3552fb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.754486 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xzxn\" (UniqueName: \"kubernetes.io/projected/cc8ce195-1fe1-4684-8172-e710b3552fb5-kube-api-access-4xzxn\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.754557 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.754590 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8ce195-1fe1-4684-8172-e710b3552fb5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.913037 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:13:23 crc kubenswrapper[4810]: I0219 15:13:23.916828 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8sn2"] Feb 19 15:13:25 crc kubenswrapper[4810]: I0219 15:13:25.275612 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blpmq"] Feb 19 15:13:25 crc kubenswrapper[4810]: I0219 15:13:25.276151 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-blpmq" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="registry-server" containerID="cri-o://6cc469c93c48a9ab5d144e46acaa99a2b78d56e0e2fbdf938f36668cd8a62128" gracePeriod=2 Feb 19 15:13:25 crc kubenswrapper[4810]: I0219 15:13:25.450712 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" 
path="/var/lib/kubelet/pods/cc8ce195-1fe1-4684-8172-e710b3552fb5/volumes" Feb 19 15:13:26 crc kubenswrapper[4810]: I0219 15:13:26.603808 4810 generic.go:334] "Generic (PLEG): container finished" podID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerID="6cc469c93c48a9ab5d144e46acaa99a2b78d56e0e2fbdf938f36668cd8a62128" exitCode=0 Feb 19 15:13:26 crc kubenswrapper[4810]: I0219 15:13:26.603861 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerDied","Data":"6cc469c93c48a9ab5d144e46acaa99a2b78d56e0e2fbdf938f36668cd8a62128"} Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.028225 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189098 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189559 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="extract-utilities" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189595 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="extract-utilities" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189622 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189637 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189663 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="extract-utilities" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189677 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="extract-utilities" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189698 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="extract-content" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189712 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="extract-content" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189735 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="extract-content" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189747 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="extract-content" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.189763 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.189775 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.190131 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.190161 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8ce195-1fe1-4684-8172-e710b3552fb5" containerName="registry-server" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.190748 4810 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.190947 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.191223 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb" gracePeriod=15 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.191257 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef" gracePeriod=15 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.191297 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b" gracePeriod=15 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.191770 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e" gracePeriod=15 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.192719 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 
15:13:27.192925 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.192939 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.192959 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.192969 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.192980 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.192989 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.193006 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193015 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.193026 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193035 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 15:13:27 
crc kubenswrapper[4810]: E0219 15:13:27.193049 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193057 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.193070 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193081 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193233 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193245 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193255 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193267 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193283 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193294 4810 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193306 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.193493 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.193506 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.191405 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e" gracePeriod=15 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.202096 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content\") pod \"4127fef2-ef2b-4cc4-967d-d52dac26f314\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.202292 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities\") pod \"4127fef2-ef2b-4cc4-967d-d52dac26f314\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.202417 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvr6\" (UniqueName: 
\"kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6\") pod \"4127fef2-ef2b-4cc4-967d-d52dac26f314\" (UID: \"4127fef2-ef2b-4cc4-967d-d52dac26f314\") " Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.209223 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities" (OuterVolumeSpecName: "utilities") pod "4127fef2-ef2b-4cc4-967d-d52dac26f314" (UID: "4127fef2-ef2b-4cc4-967d-d52dac26f314"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.249556 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6" (OuterVolumeSpecName: "kube-api-access-thvr6") pod "4127fef2-ef2b-4cc4-967d-d52dac26f314" (UID: "4127fef2-ef2b-4cc4-967d-d52dac26f314"). InnerVolumeSpecName "kube-api-access-thvr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.291831 4810 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.301285 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4127fef2-ef2b-4cc4-967d-d52dac26f314" (UID: "4127fef2-ef2b-4cc4-967d-d52dac26f314"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305052 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305132 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305215 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305284 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305362 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305429 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305511 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thvr6\" (UniqueName: \"kubernetes.io/projected/4127fef2-ef2b-4cc4-967d-d52dac26f314-kube-api-access-thvr6\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305530 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.305544 4810 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4127fef2-ef2b-4cc4-967d-d52dac26f314-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406443 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406537 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406570 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406608 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406574 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406665 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406699 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406737 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406771 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406777 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406734 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406845 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406719 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.406820 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc 
kubenswrapper[4810]: I0219 15:13:27.406889 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.592685 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.614681 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.617420 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.618994 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e" exitCode=0 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.619040 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b" exitCode=0 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.619057 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef" exitCode=0 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.619074 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e" exitCode=2 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.619162 4810 scope.go:117] "RemoveContainer" containerID="640d02443d7d20048695866b4db1d1fdd6469c712a804010860ca4c730f59c65" Feb 19 15:13:27 crc kubenswrapper[4810]: W0219 15:13:27.619164 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c202c6fe7155bb03a73466e079dfd207307a44f3613fdc0586849f7cd5739121 WatchSource:0}: Error finding container c202c6fe7155bb03a73466e079dfd207307a44f3613fdc0586849f7cd5739121: Status 404 returned error can't find the container with id c202c6fe7155bb03a73466e079dfd207307a44f3613fdc0586849f7cd5739121 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.622521 4810 generic.go:334] "Generic (PLEG): container finished" podID="3788d870-2889-4190-9675-e4da44f69a71" containerID="df933f73e8a108c6e7ee4ac92776be1551ecd1fa292247fa01da1f7c1d90d486" exitCode=0 Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.622616 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3788d870-2889-4190-9675-e4da44f69a71","Type":"ContainerDied","Data":"df933f73e8a108c6e7ee4ac92776be1551ecd1fa292247fa01da1f7c1d90d486"} Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.623410 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.626919 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blpmq" 
event={"ID":"4127fef2-ef2b-4cc4-967d-d52dac26f314","Type":"ContainerDied","Data":"4442c4b83b02a776b63ed28285fde96beca3df86a20a033d8feb27311a4298e1"} Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.627041 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blpmq" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.628457 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.628958 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:27 crc kubenswrapper[4810]: E0219 15:13:27.630201 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895ae9aa35d2a3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,LastTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.631869 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.632265 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.651286 4810 scope.go:117] "RemoveContainer" containerID="6cc469c93c48a9ab5d144e46acaa99a2b78d56e0e2fbdf938f36668cd8a62128" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.674841 4810 scope.go:117] "RemoveContainer" containerID="b6b451b278f70ee6f85bbefa846a37e02c57266a0a01e80c05241319be84555e" Feb 19 15:13:27 crc kubenswrapper[4810]: I0219 15:13:27.691974 4810 scope.go:117] "RemoveContainer" containerID="01e828c8531030a2de34cc2f43410bf4a69a3af4e19f1adb2dbd098f1f78eca6" Feb 19 15:13:28 crc kubenswrapper[4810]: I0219 15:13:28.640026 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe"} Feb 
19 15:13:28 crc kubenswrapper[4810]: I0219 15:13:28.640741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c202c6fe7155bb03a73466e079dfd207307a44f3613fdc0586849f7cd5739121"} Feb 19 15:13:28 crc kubenswrapper[4810]: I0219 15:13:28.641845 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:28 crc kubenswrapper[4810]: I0219 15:13:28.642499 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:28 crc kubenswrapper[4810]: E0219 15:13:28.642847 4810 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:13:28 crc kubenswrapper[4810]: I0219 15:13:28.644199 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.031399 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.032779 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.033650 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233546 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir\") pod \"3788d870-2889-4190-9675-e4da44f69a71\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233648 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock\") pod \"3788d870-2889-4190-9675-e4da44f69a71\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233657 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3788d870-2889-4190-9675-e4da44f69a71" (UID: "3788d870-2889-4190-9675-e4da44f69a71"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233705 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access\") pod \"3788d870-2889-4190-9675-e4da44f69a71\" (UID: \"3788d870-2889-4190-9675-e4da44f69a71\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233739 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock" (OuterVolumeSpecName: "var-lock") pod "3788d870-2889-4190-9675-e4da44f69a71" (UID: "3788d870-2889-4190-9675-e4da44f69a71"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233983 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.233998 4810 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3788d870-2889-4190-9675-e4da44f69a71-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.239097 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3788d870-2889-4190-9675-e4da44f69a71" (UID: "3788d870-2889-4190-9675-e4da44f69a71"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.344281 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3788d870-2889-4190-9675-e4da44f69a71-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.645577 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.646750 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.647240 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.647610 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.648590 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.657388 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.658745 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb" exitCode=0 Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.658847 4810 scope.go:117] "RemoveContainer" containerID="ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.658919 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.661672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3788d870-2889-4190-9675-e4da44f69a71","Type":"ContainerDied","Data":"edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967"} Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.661713 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edeb9d3a80185cfad5f2d137a73e03675b14c1a6f2b489fec04d11b503684967" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.661803 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.667087 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.667676 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.668198 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.685620 4810 scope.go:117] "RemoveContainer" containerID="77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.701051 4810 scope.go:117] "RemoveContainer" containerID="42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.717836 4810 scope.go:117] "RemoveContainer" containerID="60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.732542 4810 scope.go:117] "RemoveContainer" containerID="16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb" Feb 19 15:13:29 crc kubenswrapper[4810]: 
I0219 15:13:29.749194 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749273 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749351 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749384 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749476 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749603 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749772 4810 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749795 4810 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.749808 4810 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.755002 4810 scope.go:117] "RemoveContainer" containerID="875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.778456 4810 scope.go:117] "RemoveContainer" containerID="ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.779107 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\": container with ID starting with ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e not found: ID does not exist" 
containerID="ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.779194 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e"} err="failed to get container status \"ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\": rpc error: code = NotFound desc = could not find container \"ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e\": container with ID starting with ef740fa90b279363e303fb824188d0a26097b06aa8eaa488c0c800a8caa8f10e not found: ID does not exist" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.779243 4810 scope.go:117] "RemoveContainer" containerID="77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.779742 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\": container with ID starting with 77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b not found: ID does not exist" containerID="77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.779780 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b"} err="failed to get container status \"77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\": rpc error: code = NotFound desc = could not find container \"77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b\": container with ID starting with 77137e37c7a8cdddbba21456e942c760fb2cc6c31533e3bc6c4561929b41393b not found: ID does not exist" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.779809 4810 scope.go:117] 
"RemoveContainer" containerID="42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.780547 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\": container with ID starting with 42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef not found: ID does not exist" containerID="42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.780611 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef"} err="failed to get container status \"42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\": rpc error: code = NotFound desc = could not find container \"42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef\": container with ID starting with 42939a1750dd650a11f4f83812694248cc8767b040f5cab76ee64d30ad6004ef not found: ID does not exist" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.780650 4810 scope.go:117] "RemoveContainer" containerID="60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.781692 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\": container with ID starting with 60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e not found: ID does not exist" containerID="60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.781731 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e"} err="failed to get container status \"60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\": rpc error: code = NotFound desc = could not find container \"60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e\": container with ID starting with 60a3707767fcc9c9ee6a808f9c0598457f1710b6394ecf9910f7a1a0ddd8ec1e not found: ID does not exist" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.781753 4810 scope.go:117] "RemoveContainer" containerID="16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.782200 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\": container with ID starting with 16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb not found: ID does not exist" containerID="16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.782246 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb"} err="failed to get container status \"16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\": rpc error: code = NotFound desc = could not find container \"16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb\": container with ID starting with 16713d77989f991d6b3136c41aac696aaa3233a0f3454bcd299cd968f46872eb not found: ID does not exist" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.782445 4810 scope.go:117] "RemoveContainer" containerID="875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f" Feb 19 15:13:29 crc kubenswrapper[4810]: E0219 15:13:29.782906 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\": container with ID starting with 875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f not found: ID does not exist" containerID="875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f" Feb 19 15:13:29 crc kubenswrapper[4810]: I0219 15:13:29.782936 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f"} err="failed to get container status \"875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\": rpc error: code = NotFound desc = could not find container \"875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f\": container with ID starting with 875ed8333c1f444f5418ec6900e7d62c687cf7441e1be23a7e8138c9c721fb7f not found: ID does not exist" Feb 19 15:13:30 crc kubenswrapper[4810]: I0219 15:13:30.014225 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:30 crc kubenswrapper[4810]: I0219 15:13:30.014854 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:30 crc kubenswrapper[4810]: I0219 15:13:30.015455 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:31 crc kubenswrapper[4810]: E0219 15:13:31.357221 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895ae9aa35d2a3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,LastTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 15:13:31 crc kubenswrapper[4810]: I0219 15:13:31.444897 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:31 crc kubenswrapper[4810]: I0219 15:13:31.445495 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:31 crc kubenswrapper[4810]: I0219 15:13:31.446233 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:31 crc kubenswrapper[4810]: I0219 15:13:31.447709 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.032936 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.033625 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.034476 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.035079 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:32 crc 
kubenswrapper[4810]: E0219 15:13:32.035529 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:32 crc kubenswrapper[4810]: I0219 15:13:32.035582 4810 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.036109 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.236893 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Feb 19 15:13:32 crc kubenswrapper[4810]: E0219 15:13:32.638529 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Feb 19 15:13:33 crc kubenswrapper[4810]: E0219 15:13:33.439675 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Feb 19 15:13:35 crc kubenswrapper[4810]: E0219 15:13:35.041987 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Feb 19 15:13:35 crc kubenswrapper[4810]: E0219 15:13:35.446171 4810 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" volumeName="registry-storage" Feb 19 15:13:37 crc kubenswrapper[4810]: I0219 15:13:37.722380 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift" containerID="cri-o://1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2" gracePeriod=15 Feb 19 15:13:38 crc kubenswrapper[4810]: E0219 15:13:38.243445 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="6.4s" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.365056 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.365704 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.366093 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.366490 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501575 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501621 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: 
\"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501646 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501693 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501712 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501729 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501760 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501786 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501806 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501836 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501859 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxw8d\" (UniqueName: \"kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501887 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.501908 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection\") pod \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\" (UID: \"c18fb461-ce5b-43ad-85ca-305c3f8a7d46\") " Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.502809 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.503124 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.503414 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.503959 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.504135 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.509294 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.510266 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.511841 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.512033 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.512154 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.513127 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d" (OuterVolumeSpecName: "kube-api-access-zxw8d") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "kube-api-access-zxw8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.517259 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.518204 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.518894 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c18fb461-ce5b-43ad-85ca-305c3f8a7d46" (UID: "c18fb461-ce5b-43ad-85ca-305c3f8a7d46"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603197 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603288 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603306 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603345 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603357 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxw8d\" (UniqueName: \"kubernetes.io/projected/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-kube-api-access-zxw8d\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603366 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603377 4810 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603389 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603398 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603444 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603454 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603466 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603474 4810 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.603483 4810 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c18fb461-ce5b-43ad-85ca-305c3f8a7d46-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.729910 4810 generic.go:334] "Generic (PLEG): container finished" podID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerID="1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2" exitCode=0 Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.729995 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" event={"ID":"c18fb461-ce5b-43ad-85ca-305c3f8a7d46","Type":"ContainerDied","Data":"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2"} Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.730049 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" event={"ID":"c18fb461-ce5b-43ad-85ca-305c3f8a7d46","Type":"ContainerDied","Data":"a5b8d6a2012cb01f6278524103e645fa596a71b16bda554c88859e183269d288"} Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.730077 4810 scope.go:117] "RemoveContainer" containerID="1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.730085 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.730702 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.731115 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.731841 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.746553 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.747725 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.747949 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.756776 4810 scope.go:117] "RemoveContainer" containerID="1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2" Feb 19 15:13:38 crc kubenswrapper[4810]: E0219 15:13:38.757385 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2\": container with ID starting with 1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2 not found: ID does not exist" containerID="1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2" Feb 19 15:13:38 crc kubenswrapper[4810]: I0219 15:13:38.757437 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2"} err="failed to get container status \"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2\": rpc error: code = NotFound desc = could not find container \"1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2\": container with ID starting with 1540e32985c892982c268c96e96e5e47cb3be178ce36f945dc12ccf2635f82d2 not found: ID does not exist" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.438473 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.439443 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.440181 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.441057 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.461605 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.461647 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:40 crc kubenswrapper[4810]: E0219 15:13:40.462312 4810 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.463074 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:40 crc kubenswrapper[4810]: W0219 15:13:40.487845 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b95b27a53792cf48c31d3de79d5da1f11dd0122feff8ae7f33a11b7e86a1c396 WatchSource:0}: Error finding container b95b27a53792cf48c31d3de79d5da1f11dd0122feff8ae7f33a11b7e86a1c396: Status 404 returned error can't find the container with id b95b27a53792cf48c31d3de79d5da1f11dd0122feff8ae7f33a11b7e86a1c396 Feb 19 15:13:40 crc kubenswrapper[4810]: I0219 15:13:40.751292 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b95b27a53792cf48c31d3de79d5da1f11dd0122feff8ae7f33a11b7e86a1c396"} Feb 19 15:13:41 crc kubenswrapper[4810]: E0219 15:13:41.359835 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895ae9aa35d2a3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,LastTimestamp:2026-02-19 15:13:27.629146682 +0000 UTC m=+237.111176826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.449347 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.450266 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.450973 4810 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.451516 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.759004 4810 
generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2edba90b5013aedd5164f7df8b4ee66d180c2d853b109ba9560f61512a8fbd83" exitCode=0 Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.759454 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.759496 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.759128 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2edba90b5013aedd5164f7df8b4ee66d180c2d853b109ba9560f61512a8fbd83"} Feb 19 15:13:41 crc kubenswrapper[4810]: E0219 15:13:41.760186 4810 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.760202 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.760680 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.761062 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.761670 4810 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.763312 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.763395 4810 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c" exitCode=1 Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.763436 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c"} Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.764022 4810 scope.go:117] "RemoveContainer" containerID="89e969496cd5adab974a5d8b8c3051bcdfb63c582b8f0d6d9af233d25e2c1d3c" Feb 19 15:13:41 crc 
kubenswrapper[4810]: I0219 15:13:41.764202 4810 status_manager.go:851] "Failed to get status for pod" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" pod="openshift-marketplace/certified-operators-blpmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-blpmq\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.764756 4810 status_manager.go:851] "Failed to get status for pod" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" pod="openshift-authentication/oauth-openshift-558db77b4-r74mv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-r74mv\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.765049 4810 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.765430 4810 status_manager.go:851] "Failed to get status for pod" podUID="3788d870-2889-4190-9675-e4da44f69a71" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 15:13:41 crc kubenswrapper[4810]: I0219 15:13:41.765800 4810 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Feb 19 
15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.775972 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ecb4dc2953fc0dd2b18030ccd9aa2495be814d5a7c10f3db03d0199658da5674"} Feb 19 15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.776403 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"932073081c0f47ba7edb3b4806f86d49553956625ae2cf6e09e6c9c7e82f1408"} Feb 19 15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.776416 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"82641ab373c5a3f584b9718a4e1076acabf3c6107eb17711cf62bcec7a3379c6"} Feb 19 15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.776426 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"951401d9d5a5d6a99f842ad8a4cd8ad2ff73fd645878c2ccfed4f28e20399ff9"} Feb 19 15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.788975 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 15:13:42 crc kubenswrapper[4810]: I0219 15:13:42.789039 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04741ed0fdbbb3980f28a8f3763ed31736522fe98f071b4a91c2caf991f00ce4"} Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.538438 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.538797 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.539120 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.798196 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a675a26c30628492c2daff65ad50468a5356c241a3c7ad66e33b38ee05c2e5ca"} Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.798368 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.798493 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:43 crc kubenswrapper[4810]: I0219 15:13:43.798520 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:45 crc kubenswrapper[4810]: I0219 15:13:45.463499 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:45 crc kubenswrapper[4810]: I0219 
15:13:45.463814 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:45 crc kubenswrapper[4810]: I0219 15:13:45.479069 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:46 crc kubenswrapper[4810]: I0219 15:13:46.222581 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:13:48 crc kubenswrapper[4810]: I0219 15:13:48.814846 4810 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:49 crc kubenswrapper[4810]: I0219 15:13:49.837576 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:49 crc kubenswrapper[4810]: I0219 15:13:49.837986 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:49 crc kubenswrapper[4810]: I0219 15:13:49.843484 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 15:13:49 crc kubenswrapper[4810]: I0219 15:13:49.847828 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10ac7c28-166d-4a4e-aaf9-c7adac6a9cc5" Feb 19 15:13:50 crc kubenswrapper[4810]: I0219 15:13:50.844062 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:50 crc kubenswrapper[4810]: I0219 15:13:50.844109 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="b6694b0d-3264-43eb-bd52-5088c7d2bf15" Feb 19 15:13:51 crc kubenswrapper[4810]: I0219 15:13:51.449641 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10ac7c28-166d-4a4e-aaf9-c7adac6a9cc5" Feb 19 15:13:53 crc kubenswrapper[4810]: I0219 15:13:53.539025 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 15:13:53 crc kubenswrapper[4810]: I0219 15:13:53.539576 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 15:13:58 crc kubenswrapper[4810]: I0219 15:13:58.444015 4810 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 15:13:58 crc kubenswrapper[4810]: I0219 15:13:58.553271 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 15:13:58 crc kubenswrapper[4810]: I0219 15:13:58.692915 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 15:13:58 crc kubenswrapper[4810]: I0219 15:13:58.756499 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 15:13:59 crc kubenswrapper[4810]: I0219 15:13:59.355120 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 15:13:59 crc kubenswrapper[4810]: I0219 15:13:59.442096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 15:13:59 crc kubenswrapper[4810]: I0219 15:13:59.693567 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 15:13:59 crc kubenswrapper[4810]: I0219 15:13:59.820802 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 15:13:59 crc kubenswrapper[4810]: I0219 15:13:59.898358 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.136726 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.376365 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.402970 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.586596 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.655653 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.877224 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 15:14:00 crc kubenswrapper[4810]: I0219 15:14:00.917575 4810 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 15:14:01 crc kubenswrapper[4810]: I0219 15:14:01.042026 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 15:14:01 crc kubenswrapper[4810]: I0219 15:14:01.190123 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 15:14:01 crc kubenswrapper[4810]: I0219 15:14:01.682186 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 15:14:01 crc kubenswrapper[4810]: I0219 15:14:01.805997 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 15:14:01 crc kubenswrapper[4810]: I0219 15:14:01.828125 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.050046 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.134495 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.168810 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.291735 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.357350 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 
15:14:02.366036 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.390538 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.409047 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.489261 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.568276 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.577648 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.594973 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.716170 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.831419 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.846047 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.884028 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.902747 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 15:14:02 crc kubenswrapper[4810]: I0219 15:14:02.971194 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.071741 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.107915 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.156476 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.225455 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.260995 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.302449 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.317350 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.334137 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.337072 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.357563 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.362653 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.386888 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.453148 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.463187 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.545095 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.552418 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.630368 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.637082 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 
19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.730671 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.791053 4810 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.894190 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 15:14:03 crc kubenswrapper[4810]: I0219 15:14:03.931622 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.021089 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.098408 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.112693 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.159355 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.245902 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.258013 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.426191 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.462022 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.472957 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.490729 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.543124 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.546193 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.575543 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.680828 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.692392 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.749366 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.749456 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.801277 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.856306 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 15:14:04 crc kubenswrapper[4810]: I0219 15:14:04.947978 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.031809 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.103101 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.114080 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.177119 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.395723 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.399201 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.414473 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.460381 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 15:14:05 crc 
kubenswrapper[4810]: I0219 15:14:05.528745 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.624612 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.749738 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.881699 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.892644 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.983548 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 19 15:14:05 crc kubenswrapper[4810]: I0219 15:14:05.994603 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.006435 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.072808 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.120220 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.134931 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.171029 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.240779 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.468220 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.521849 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.526440 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.527864 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.577488 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.598597 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.602903 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.633663 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.657854 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.691092 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.720577 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 15:14:06 crc kubenswrapper[4810]: I0219 15:14:06.794981 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.144962 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.228847 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.240004 4810 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.247731 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-r74mv","openshift-marketplace/certified-operators-blpmq","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.247839 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.256672 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.258715 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.280119 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.280083050000002 podStartE2EDuration="19.28008305s" podCreationTimestamp="2026-02-19 15:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:14:07.272947747 +0000 UTC m=+276.754977871" watchObservedRunningTime="2026-02-19 15:14:07.28008305 +0000 UTC m=+276.762113224"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.371854 4810 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.446945 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4127fef2-ef2b-4cc4-967d-d52dac26f314" path="/var/lib/kubelet/pods/4127fef2-ef2b-4cc4-967d-d52dac26f314/volumes"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.447639 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" path="/var/lib/kubelet/pods/c18fb461-ce5b-43ad-85ca-305c3f8a7d46/volumes"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.489557 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.509730 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.830527 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.838211 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.846808 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.855801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.871230 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.921753 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.946841 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 19 15:14:07 crc kubenswrapper[4810]: I0219 15:14:07.947573 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.104770 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.144406 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.167424 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.223795 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.344730 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.366425 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.469878 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.722673 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.732647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.813858 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.842663 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.904722 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 19 15:14:08 crc kubenswrapper[4810]: I0219 15:14:08.930900 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.014174 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.019142 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.041314 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.151482 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.155166 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.205491 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.276409 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.308561 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.350371 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.377553 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.399255 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.430408 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.466059 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.519623 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.526745 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.568287 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.579919 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.659183 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.803471 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.835711 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.840225 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.851375 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.852971 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 19 15:14:09 crc kubenswrapper[4810]: I0219 15:14:09.888896 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.036120 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.143414 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.184431 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.185663 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.186447 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.244966 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.329887 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.389767 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.442647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.579764 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.621770 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.642921 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.659362 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.682866 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.714372 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.774969 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.801742 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.869207 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.875106 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-85766c7959-6lnjr"]
Feb 19 15:14:10 crc kubenswrapper[4810]: E0219 15:14:10.875420 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.875473 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift"
Feb 19 15:14:10 crc kubenswrapper[4810]: E0219 15:14:10.875494 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3788d870-2889-4190-9675-e4da44f69a71" containerName="installer"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.875505 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3788d870-2889-4190-9675-e4da44f69a71" containerName="installer"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.875669 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3788d870-2889-4190-9675-e4da44f69a71" containerName="installer"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.875714 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18fb461-ce5b-43ad-85ca-305c3f8a7d46" containerName="oauth-openshift"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.876299 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.884882 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.885182 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.885204 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.885904 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.886234 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.886491 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.886692 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.887125 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.887835 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.888007 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.888159 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.891812 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.899716 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.906177 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85766c7959-6lnjr"]
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.908065 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.916691 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.925978 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.932393 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.959847 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960092 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-policies\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960189 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-dir\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960288 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960430 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960570 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960697 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960826 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.960941 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q259\" (UniqueName: \"kubernetes.io/projected/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-kube-api-access-2q259\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.961043 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.961162 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.961274 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.961431 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:10 crc kubenswrapper[4810]: I0219 15:14:10.961565 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.026555 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.062823 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063062 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-policies\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063173 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-dir\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063254 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-dir\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063389 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063500 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063605 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063751 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063866 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063975 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q259\" (UniqueName: \"kubernetes.io/projected/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-kube-api-access-2q259\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064593 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-service-ca\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064603 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064680 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.063897 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-audit-policies\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064724 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.064756 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.065469 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-cliconfig\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.067017 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr"
Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.068853 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-serving-cert\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " 
pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.068962 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.070147 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-router-certs\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.070275 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-error\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.070868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-session\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.071461 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-template-login\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.073400 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.086106 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.089150 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q259\" (UniqueName: \"kubernetes.io/projected/ba91d0d8-a2ad-47b1-b161-56ed570a2e42-kube-api-access-2q259\") pod \"oauth-openshift-85766c7959-6lnjr\" (UID: \"ba91d0d8-a2ad-47b1-b161-56ed570a2e42\") " pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.095915 4810 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.096116 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe" gracePeriod=5 Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.117815 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.127490 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.185771 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.209157 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.209263 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.320797 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.349852 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.362543 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.422386 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.453436 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.531300 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.544204 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.582964 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.602142 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-85766c7959-6lnjr"] Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.685608 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.740588 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.817011 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.837949 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.870514 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.876172 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.881036 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.968888 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.994605 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" event={"ID":"ba91d0d8-a2ad-47b1-b161-56ed570a2e42","Type":"ContainerStarted","Data":"f3b919ca6117bba187eaeeb48421608b19352c475498cf813f24b9f89e02a35d"} Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.994731 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" event={"ID":"ba91d0d8-a2ad-47b1-b161-56ed570a2e42","Type":"ContainerStarted","Data":"6bf5c788ebd52253ff52bd60a644388aa228fe2450514654184215f0a929b3df"} Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.995062 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.996827 4810 patch_prober.go:28] interesting pod/oauth-openshift-85766c7959-6lnjr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Feb 19 15:14:11 crc kubenswrapper[4810]: I0219 15:14:11.996892 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" podUID="ba91d0d8-a2ad-47b1-b161-56ed570a2e42" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: 
connect: connection refused" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.022412 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" podStartSLOduration=60.022391179 podStartE2EDuration="1m0.022391179s" podCreationTimestamp="2026-02-19 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:14:12.019233158 +0000 UTC m=+281.501263292" watchObservedRunningTime="2026-02-19 15:14:12.022391179 +0000 UTC m=+281.504421323" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.035354 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.120875 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.176725 4810 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.257500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.532739 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.559862 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.683451 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.769318 4810 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.891688 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.924665 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 15:14:12 crc kubenswrapper[4810]: I0219 15:14:12.945856 4810 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.000794 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.001128 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.004427 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-85766c7959-6lnjr_ba91d0d8-a2ad-47b1-b161-56ed570a2e42/oauth-openshift/0.log" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.004488 4810 generic.go:334] "Generic (PLEG): container finished" podID="ba91d0d8-a2ad-47b1-b161-56ed570a2e42" containerID="f3b919ca6117bba187eaeeb48421608b19352c475498cf813f24b9f89e02a35d" exitCode=255 Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.004533 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" event={"ID":"ba91d0d8-a2ad-47b1-b161-56ed570a2e42","Type":"ContainerDied","Data":"f3b919ca6117bba187eaeeb48421608b19352c475498cf813f24b9f89e02a35d"} Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.005700 4810 scope.go:117] "RemoveContainer" 
containerID="f3b919ca6117bba187eaeeb48421608b19352c475498cf813f24b9f89e02a35d" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.075253 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.190583 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.206500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.222558 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.254969 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.282691 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.638275 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.715081 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 15:14:13 crc kubenswrapper[4810]: I0219 15:14:13.964697 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.019979 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-85766c7959-6lnjr_ba91d0d8-a2ad-47b1-b161-56ed570a2e42/oauth-openshift/0.log" Feb 19 15:14:14 crc 
kubenswrapper[4810]: I0219 15:14:14.020038 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" event={"ID":"ba91d0d8-a2ad-47b1-b161-56ed570a2e42","Type":"ContainerStarted","Data":"b3371b30221678523b2083e3d3c9fc5858d7f2f0256abf4e521e9f1dc8d9aad2"} Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.020423 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.023463 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.025626 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-85766c7959-6lnjr" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.182200 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.602475 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.617002 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.683906 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.732198 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 15:14:14 crc kubenswrapper[4810]: I0219 15:14:14.739641 4810 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 15:14:15 crc kubenswrapper[4810]: I0219 15:14:15.310012 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 15:14:15 crc kubenswrapper[4810]: I0219 15:14:15.508637 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.694026 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.694106 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836411 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836461 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836480 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836560 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836551 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836606 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836643 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836629 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.836692 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.837233 4810 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.837271 4810 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.837294 4810 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.837312 4810 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.849572 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:14:16 crc kubenswrapper[4810]: I0219 15:14:16.938663 4810 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.039570 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.039641 4810 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe" exitCode=137 Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.039692 4810 scope.go:117] "RemoveContainer" containerID="a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.039751 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.059996 4810 scope.go:117] "RemoveContainer" containerID="a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe" Feb 19 15:14:17 crc kubenswrapper[4810]: E0219 15:14:17.060477 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe\": container with ID starting with a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe not found: ID does not exist" containerID="a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.060517 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe"} err="failed to get container status \"a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe\": rpc error: code = NotFound desc = could not find container \"a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe\": container with ID starting with a2157c6845332b692ab9ff67fd6e2eb4fc93b35babe6d2954b26b7839c430dbe not found: ID does not exist" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.078223 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 15:14:17 crc kubenswrapper[4810]: I0219 15:14:17.451962 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 15:14:31 crc kubenswrapper[4810]: I0219 15:14:31.158988 4810 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 15:14:32 crc kubenswrapper[4810]: I0219 15:14:32.140126 
4810 generic.go:334] "Generic (PLEG): container finished" podID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerID="20e10c5e198ddee65262d0528102f59529d30ad7bead31c5ab5c764fe94b9de0" exitCode=0 Feb 19 15:14:32 crc kubenswrapper[4810]: I0219 15:14:32.140169 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerDied","Data":"20e10c5e198ddee65262d0528102f59529d30ad7bead31c5ab5c764fe94b9de0"} Feb 19 15:14:32 crc kubenswrapper[4810]: I0219 15:14:32.140622 4810 scope.go:117] "RemoveContainer" containerID="20e10c5e198ddee65262d0528102f59529d30ad7bead31c5ab5c764fe94b9de0" Feb 19 15:14:33 crc kubenswrapper[4810]: I0219 15:14:33.151432 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerStarted","Data":"18198b3c1f0a5d1d83b301a87f12dccc9fde533f7e0587694e992331246f2d37"} Feb 19 15:14:33 crc kubenswrapper[4810]: I0219 15:14:33.152971 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:14:33 crc kubenswrapper[4810]: I0219 15:14:33.155934 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.169458 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg"] Feb 19 15:15:00 crc kubenswrapper[4810]: E0219 15:15:00.170228 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.170243 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.170377 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.170811 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.172887 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.173674 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.180668 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg"] Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.325840 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.326065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgrd\" (UniqueName: \"kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.326127 
4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.428214 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.428697 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgrd\" (UniqueName: \"kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.428772 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.429864 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.435078 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.450089 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgrd\" (UniqueName: \"kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd\") pod \"collect-profiles-29525235-vk2jg\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.486927 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:00 crc kubenswrapper[4810]: I0219 15:15:00.984175 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg"] Feb 19 15:15:00 crc kubenswrapper[4810]: W0219 15:15:00.992569 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3fa539_f490_4b25_b592_d199cc757b8a.slice/crio-7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447 WatchSource:0}: Error finding container 7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447: Status 404 returned error can't find the container with id 7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447 Feb 19 15:15:01 crc kubenswrapper[4810]: I0219 15:15:01.335430 4810 generic.go:334] "Generic (PLEG): container finished" podID="2f3fa539-f490-4b25-b592-d199cc757b8a" containerID="88bb50e9f73c36f14be470f12c634b9e24a853c43e328fa253f758c7a485a40a" exitCode=0 Feb 19 15:15:01 crc kubenswrapper[4810]: I0219 15:15:01.335519 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" event={"ID":"2f3fa539-f490-4b25-b592-d199cc757b8a","Type":"ContainerDied","Data":"88bb50e9f73c36f14be470f12c634b9e24a853c43e328fa253f758c7a485a40a"} Feb 19 15:15:01 crc kubenswrapper[4810]: I0219 15:15:01.335556 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" event={"ID":"2f3fa539-f490-4b25-b592-d199cc757b8a","Type":"ContainerStarted","Data":"7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447"} Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.687709 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.872762 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncgrd\" (UniqueName: \"kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd\") pod \"2f3fa539-f490-4b25-b592-d199cc757b8a\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.872979 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume\") pod \"2f3fa539-f490-4b25-b592-d199cc757b8a\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.873030 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume\") pod \"2f3fa539-f490-4b25-b592-d199cc757b8a\" (UID: \"2f3fa539-f490-4b25-b592-d199cc757b8a\") " Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.874218 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f3fa539-f490-4b25-b592-d199cc757b8a" (UID: "2f3fa539-f490-4b25-b592-d199cc757b8a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.880960 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd" (OuterVolumeSpecName: "kube-api-access-ncgrd") pod "2f3fa539-f490-4b25-b592-d199cc757b8a" (UID: "2f3fa539-f490-4b25-b592-d199cc757b8a"). 
InnerVolumeSpecName "kube-api-access-ncgrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.881899 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f3fa539-f490-4b25-b592-d199cc757b8a" (UID: "2f3fa539-f490-4b25-b592-d199cc757b8a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.974496 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3fa539-f490-4b25-b592-d199cc757b8a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.974558 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f3fa539-f490-4b25-b592-d199cc757b8a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:02 crc kubenswrapper[4810]: I0219 15:15:02.974584 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncgrd\" (UniqueName: \"kubernetes.io/projected/2f3fa539-f490-4b25-b592-d199cc757b8a-kube-api-access-ncgrd\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:03 crc kubenswrapper[4810]: I0219 15:15:03.351077 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" event={"ID":"2f3fa539-f490-4b25-b592-d199cc757b8a","Type":"ContainerDied","Data":"7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447"} Feb 19 15:15:03 crc kubenswrapper[4810]: I0219 15:15:03.351129 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4255db1170dcd73c8818f18b999b4830ae54d545bb0ede34e81f9454503447" Feb 19 15:15:03 crc kubenswrapper[4810]: I0219 15:15:03.351188 4810 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg" Feb 19 15:15:19 crc kubenswrapper[4810]: I0219 15:15:19.537656 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:15:19 crc kubenswrapper[4810]: I0219 15:15:19.538465 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.856448 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59jrk"] Feb 19 15:15:30 crc kubenswrapper[4810]: E0219 15:15:30.857339 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3fa539-f490-4b25-b592-d199cc757b8a" containerName="collect-profiles" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.857356 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3fa539-f490-4b25-b592-d199cc757b8a" containerName="collect-profiles" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.857509 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3fa539-f490-4b25-b592-d199cc757b8a" containerName="collect-profiles" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.858045 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.878410 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59jrk"] Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.977543 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-certificates\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978418 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38318350-42a7-4d65-93b7-e11dc71e3cb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978583 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-tls\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978697 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38318350-42a7-4d65-93b7-e11dc71e3cb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978843 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jjq\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-kube-api-access-52jjq\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978918 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-bound-sa-token\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:30 crc kubenswrapper[4810]: I0219 15:15:30.978943 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-trusted-ca\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.005991 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080121 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38318350-42a7-4d65-93b7-e11dc71e3cb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080222 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-tls\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38318350-42a7-4d65-93b7-e11dc71e3cb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080296 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52jjq\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-kube-api-access-52jjq\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080344 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-bound-sa-token\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080366 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-trusted-ca\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080396 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-certificates\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.080849 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38318350-42a7-4d65-93b7-e11dc71e3cb6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.082133 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-certificates\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 
19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.083525 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38318350-42a7-4d65-93b7-e11dc71e3cb6-trusted-ca\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.088213 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-registry-tls\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.089030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38318350-42a7-4d65-93b7-e11dc71e3cb6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.096989 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52jjq\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-kube-api-access-52jjq\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.120006 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38318350-42a7-4d65-93b7-e11dc71e3cb6-bound-sa-token\") pod \"image-registry-66df7c8f76-59jrk\" (UID: \"38318350-42a7-4d65-93b7-e11dc71e3cb6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.184738 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:31 crc kubenswrapper[4810]: I0219 15:15:31.665390 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-59jrk"] Feb 19 15:15:32 crc kubenswrapper[4810]: I0219 15:15:32.550468 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" event={"ID":"38318350-42a7-4d65-93b7-e11dc71e3cb6","Type":"ContainerStarted","Data":"ccaaf0572405ab2cfd8dd4fd5bc4d293a41400947505fd08fbf0770c54aebc23"} Feb 19 15:15:32 crc kubenswrapper[4810]: I0219 15:15:32.550811 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" event={"ID":"38318350-42a7-4d65-93b7-e11dc71e3cb6","Type":"ContainerStarted","Data":"3f5d01333689d1471f7256a087f02b21acf274ca7d33bc2fad1d60b86b76b63c"} Feb 19 15:15:32 crc kubenswrapper[4810]: I0219 15:15:32.550834 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:32 crc kubenswrapper[4810]: I0219 15:15:32.569874 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" podStartSLOduration=2.5698534779999997 podStartE2EDuration="2.569853478s" podCreationTimestamp="2026-02-19 15:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:15:32.565848537 +0000 UTC m=+362.047878661" watchObservedRunningTime="2026-02-19 15:15:32.569853478 +0000 UTC m=+362.051883602" Feb 19 15:15:49 crc kubenswrapper[4810]: I0219 15:15:49.537813 4810 patch_prober.go:28] 
interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:15:49 crc kubenswrapper[4810]: I0219 15:15:49.538784 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:15:51 crc kubenswrapper[4810]: I0219 15:15:51.194317 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-59jrk" Feb 19 15:15:51 crc kubenswrapper[4810]: I0219 15:15:51.252461 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.425368 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.426677 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rk4vw" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="registry-server" containerID="cri-o://077475bcb0b82366e9846158038585fb73fc53528d8e624159315394b6489745" gracePeriod=30 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.459864 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.459929 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 
15:15:53.460233 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator" containerID="cri-o://18198b3c1f0a5d1d83b301a87f12dccc9fde533f7e0587694e992331246f2d37" gracePeriod=30 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.460992 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5ks5" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="registry-server" containerID="cri-o://6d399395ed59b272e5ce80489e65062cb3d89e0840588f5165efb67b6a223708" gracePeriod=30 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.469184 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.469615 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ptbh9" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="registry-server" containerID="cri-o://3c4b22d47de57aea7d57602a01f5293afd50a4c1301de10fd9d0be527b9184bd" gracePeriod=30 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.486258 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sm9wk"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.487376 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.500133 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.500535 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gp8sg" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="registry-server" containerID="cri-o://f7e09ee181e03814c9bffa4e23b2c1f95c88075e138b0f5ea2eafb7e14efd0cc" gracePeriod=30 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.512768 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sm9wk"] Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.646506 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.646953 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjlm\" (UniqueName: \"kubernetes.io/projected/41d27e40-a89e-4fd6-8106-824c5a257f25-kube-api-access-wrjlm\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.646988 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.700080 4810 generic.go:334] "Generic (PLEG): container finished" podID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerID="6d399395ed59b272e5ce80489e65062cb3d89e0840588f5165efb67b6a223708" exitCode=0 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.700163 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerDied","Data":"6d399395ed59b272e5ce80489e65062cb3d89e0840588f5165efb67b6a223708"} Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.702251 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerID="3c4b22d47de57aea7d57602a01f5293afd50a4c1301de10fd9d0be527b9184bd" exitCode=0 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.702337 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerDied","Data":"3c4b22d47de57aea7d57602a01f5293afd50a4c1301de10fd9d0be527b9184bd"} Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.712410 4810 generic.go:334] "Generic (PLEG): container finished" podID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerID="077475bcb0b82366e9846158038585fb73fc53528d8e624159315394b6489745" exitCode=0 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.712485 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerDied","Data":"077475bcb0b82366e9846158038585fb73fc53528d8e624159315394b6489745"} Feb 19 15:15:53 crc 
kubenswrapper[4810]: I0219 15:15:53.715752 4810 generic.go:334] "Generic (PLEG): container finished" podID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerID="f7e09ee181e03814c9bffa4e23b2c1f95c88075e138b0f5ea2eafb7e14efd0cc" exitCode=0 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.715877 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerDied","Data":"f7e09ee181e03814c9bffa4e23b2c1f95c88075e138b0f5ea2eafb7e14efd0cc"} Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.719390 4810 generic.go:334] "Generic (PLEG): container finished" podID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerID="18198b3c1f0a5d1d83b301a87f12dccc9fde533f7e0587694e992331246f2d37" exitCode=0 Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.719475 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerDied","Data":"18198b3c1f0a5d1d83b301a87f12dccc9fde533f7e0587694e992331246f2d37"} Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.719550 4810 scope.go:117] "RemoveContainer" containerID="20e10c5e198ddee65262d0528102f59529d30ad7bead31c5ab5c764fe94b9de0" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.749559 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.749614 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjlm\" (UniqueName: 
\"kubernetes.io/projected/41d27e40-a89e-4fd6-8106-824c5a257f25-kube-api-access-wrjlm\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.749644 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.751200 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.764240 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41d27e40-a89e-4fd6-8106-824c5a257f25-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.775657 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjlm\" (UniqueName: \"kubernetes.io/projected/41d27e40-a89e-4fd6-8106-824c5a257f25-kube-api-access-wrjlm\") pod \"marketplace-operator-79b997595-sm9wk\" (UID: \"41d27e40-a89e-4fd6-8106-824c5a257f25\") " pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc 
kubenswrapper[4810]: I0219 15:15:53.931390 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.935713 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.947907 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.962940 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.969808 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:15:53 crc kubenswrapper[4810]: I0219 15:15:53.987421 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.053752 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqz6g\" (UniqueName: \"kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g\") pod \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.053984 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities\") pod \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054011 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content\") pod \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054043 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content\") pod \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054073 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities\") pod \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\" (UID: \"9a3d6b1f-2011-4f7f-bea0-1d303007fe41\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054097 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities\") pod \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054115 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqzns\" (UniqueName: \"kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns\") pod \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054166 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbxf5\" (UniqueName: \"kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5\") pod \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\" (UID: \"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054206 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content\") pod \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\" (UID: \"3bf3315d-3d2f-4aeb-b925-c3832e102e85\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities\") pod \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054280 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content\") pod \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " 
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.054302 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45t8\" (UniqueName: \"kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8\") pod \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\" (UID: \"3146bc9a-c4fc-4aa1-acae-032db4aa0582\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.056513 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities" (OuterVolumeSpecName: "utilities") pod "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" (UID: "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.056825 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities" (OuterVolumeSpecName: "utilities") pod "3146bc9a-c4fc-4aa1-acae-032db4aa0582" (UID: "3146bc9a-c4fc-4aa1-acae-032db4aa0582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.057522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities" (OuterVolumeSpecName: "utilities") pod "3bf3315d-3d2f-4aeb-b925-c3832e102e85" (UID: "3bf3315d-3d2f-4aeb-b925-c3832e102e85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.059262 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5" (OuterVolumeSpecName: "kube-api-access-gbxf5") pod "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" (UID: "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53"). InnerVolumeSpecName "kube-api-access-gbxf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.062132 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities" (OuterVolumeSpecName: "utilities") pod "9a3d6b1f-2011-4f7f-bea0-1d303007fe41" (UID: "9a3d6b1f-2011-4f7f-bea0-1d303007fe41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.064108 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns" (OuterVolumeSpecName: "kube-api-access-fqzns") pod "3bf3315d-3d2f-4aeb-b925-c3832e102e85" (UID: "3bf3315d-3d2f-4aeb-b925-c3832e102e85"). InnerVolumeSpecName "kube-api-access-fqzns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.065549 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8" (OuterVolumeSpecName: "kube-api-access-b45t8") pod "3146bc9a-c4fc-4aa1-acae-032db4aa0582" (UID: "3146bc9a-c4fc-4aa1-acae-032db4aa0582"). InnerVolumeSpecName "kube-api-access-b45t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.067593 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g" (OuterVolumeSpecName: "kube-api-access-cqz6g") pod "9a3d6b1f-2011-4f7f-bea0-1d303007fe41" (UID: "9a3d6b1f-2011-4f7f-bea0-1d303007fe41"). InnerVolumeSpecName "kube-api-access-cqz6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.083705 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" (UID: "7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.119598 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a3d6b1f-2011-4f7f-bea0-1d303007fe41" (UID: "9a3d6b1f-2011-4f7f-bea0-1d303007fe41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.128978 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf3315d-3d2f-4aeb-b925-c3832e102e85" (UID: "3bf3315d-3d2f-4aeb-b925-c3832e102e85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.160558 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca\") pod \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.160749 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics\") pod \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.160785 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9vz7\" (UniqueName: \"kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7\") pod \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\" (UID: \"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9\") " Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161093 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbxf5\" (UniqueName: \"kubernetes.io/projected/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-kube-api-access-gbxf5\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161124 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161139 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 
crc kubenswrapper[4810]: I0219 15:15:54.161152 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45t8\" (UniqueName: \"kubernetes.io/projected/3146bc9a-c4fc-4aa1-acae-032db4aa0582-kube-api-access-b45t8\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161165 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqz6g\" (UniqueName: \"kubernetes.io/projected/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-kube-api-access-cqz6g\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161177 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161188 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161199 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161211 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d6b1f-2011-4f7f-bea0-1d303007fe41-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161223 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf3315d-3d2f-4aeb-b925-c3832e102e85-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.161236 4810 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqzns\" (UniqueName: \"kubernetes.io/projected/3bf3315d-3d2f-4aeb-b925-c3832e102e85-kube-api-access-fqzns\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.162776 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" (UID: "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.163860 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7" (OuterVolumeSpecName: "kube-api-access-q9vz7") pod "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" (UID: "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9"). InnerVolumeSpecName "kube-api-access-q9vz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.164398 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" (UID: "b8b1faba-e1b8-436c-aa84-ae4353c5f0a9"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.228180 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sm9wk"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.235087 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3146bc9a-c4fc-4aa1-acae-032db4aa0582" (UID: "3146bc9a-c4fc-4aa1-acae-032db4aa0582"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.262208 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.262240 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146bc9a-c4fc-4aa1-acae-032db4aa0582-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.262249 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.262258 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9vz7\" (UniqueName: \"kubernetes.io/projected/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9-kube-api-access-q9vz7\") on node \"crc\" DevicePath \"\"" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.727600 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" 
event={"ID":"41d27e40-a89e-4fd6-8106-824c5a257f25","Type":"ContainerStarted","Data":"5ccd1283cec720696319ec135a75750499238b1f385e02afa75450eab89e47aa"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.728064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" event={"ID":"41d27e40-a89e-4fd6-8106-824c5a257f25","Type":"ContainerStarted","Data":"f3a211914284ae52c30528a1fb7055afc9b3b025664d0c15800024f0c3748ddd"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.728962 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.730882 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.731985 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rk4vw" event={"ID":"3bf3315d-3d2f-4aeb-b925-c3832e102e85","Type":"ContainerDied","Data":"2a93b7168fafbe84b16d4aeee817860063438d02590293e9edf6bad1699c168a"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.732052 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rk4vw" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.732099 4810 scope.go:117] "RemoveContainer" containerID="077475bcb0b82366e9846158038585fb73fc53528d8e624159315394b6489745" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.735182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp8sg" event={"ID":"3146bc9a-c4fc-4aa1-acae-032db4aa0582","Type":"ContainerDied","Data":"8f98c34b7848371876e8226d32a5b9a72fad3efa01f7370b3de8b257667df91e"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.735296 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gp8sg" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.742520 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" event={"ID":"b8b1faba-e1b8-436c-aa84-ae4353c5f0a9","Type":"ContainerDied","Data":"23f5d4bb5ee04c131b85331f5b6ae4b924ed6b6ac6634c148de3748ba4fdc4ad"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.742649 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r2tqm" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.750097 4810 scope.go:117] "RemoveContainer" containerID="2338631bd769d72dd75e6311ec37172a35cf219ae1ae08bee9e394a35d599110" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.762685 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sm9wk" podStartSLOduration=1.76265062 podStartE2EDuration="1.76265062s" podCreationTimestamp="2026-02-19 15:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:15:54.760099783 +0000 UTC m=+384.242129907" watchObservedRunningTime="2026-02-19 15:15:54.76265062 +0000 UTC m=+384.244680774" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.768664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5ks5" event={"ID":"9a3d6b1f-2011-4f7f-bea0-1d303007fe41","Type":"ContainerDied","Data":"c3d08bc3ddaa041e0392052ef7f026d6557f47271ad22913b60948a058b74b85"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.768867 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5ks5" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.771825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptbh9" event={"ID":"7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53","Type":"ContainerDied","Data":"585fb3dde24254534020203ea59c98681809ba91054fff093daf819845736af6"} Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.772062 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptbh9" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.797482 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.802892 4810 scope.go:117] "RemoveContainer" containerID="dca6d9a99a30ff4bef03f7e86c179f1a1309a876ad27e10ee78ace680fa82510" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.838986 4810 scope.go:117] "RemoveContainer" containerID="f7e09ee181e03814c9bffa4e23b2c1f95c88075e138b0f5ea2eafb7e14efd0cc" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.844680 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rk4vw"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.856265 4810 scope.go:117] "RemoveContainer" containerID="678210fe6fd1c3abf47690dcbbfba0fc503dc30b786177c25e50c7cce621be3d" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.863262 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.870010 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5ks5"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.897598 4810 scope.go:117] "RemoveContainer" containerID="db4a068bb20ce6903e18758cc6f38e8bff29bd023c7f06ec3db18c434439f0c7" Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.900763 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.906497 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptbh9"] Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.918698 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"]
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.936004 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r2tqm"]
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.936052 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"]
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.942949 4810 scope.go:117] "RemoveContainer" containerID="18198b3c1f0a5d1d83b301a87f12dccc9fde533f7e0587694e992331246f2d37"
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.943678 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gp8sg"]
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.955154 4810 scope.go:117] "RemoveContainer" containerID="6d399395ed59b272e5ce80489e65062cb3d89e0840588f5165efb67b6a223708"
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.966575 4810 scope.go:117] "RemoveContainer" containerID="cf8a20a9712326d7f7917d432dc449c8f1126f425b48ed7c1328c76b1ca7b19c"
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.978563 4810 scope.go:117] "RemoveContainer" containerID="1962f43ea8735830650ac9b311ce674cd5cebcb42c9922bc390ae19775d9f9f0"
Feb 19 15:15:54 crc kubenswrapper[4810]: I0219 15:15:54.990585 4810 scope.go:117] "RemoveContainer" containerID="3c4b22d47de57aea7d57602a01f5293afd50a4c1301de10fd9d0be527b9184bd"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.004409 4810 scope.go:117] "RemoveContainer" containerID="2ba8cb7edb1b0d7ff70efe1c621d14ae3cd3fc1499b0b814053d764332922921"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.019567 4810 scope.go:117] "RemoveContainer" containerID="50a196ba034be9702770fc3245e22281a78913a08bd60f8b507db74c90490792"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.454728 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" path="/var/lib/kubelet/pods/3146bc9a-c4fc-4aa1-acae-032db4aa0582/volumes"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.456818 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" path="/var/lib/kubelet/pods/3bf3315d-3d2f-4aeb-b925-c3832e102e85/volumes"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.458485 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" path="/var/lib/kubelet/pods/7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53/volumes"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.461069 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" path="/var/lib/kubelet/pods/9a3d6b1f-2011-4f7f-bea0-1d303007fe41/volumes"
Feb 19 15:15:55 crc kubenswrapper[4810]: I0219 15:15:55.462773 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" path="/var/lib/kubelet/pods/b8b1faba-e1b8-436c-aa84-ae4353c5f0a9/volumes"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.042641 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"]
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.044705 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045011 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045137 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045250 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045348 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045429 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045510 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045626 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045719 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045730 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045749 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045758 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="extract-utilities"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045775 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045786 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045798 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045806 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045824 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045832 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045848 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045856 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045870 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045877 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045887 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045895 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.045908 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.045915 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="extract-content"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046214 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046230 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3146bc9a-c4fc-4aa1-acae-032db4aa0582" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046240 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf3315d-3d2f-4aeb-b925-c3832e102e85" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046248 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e18c7b6-8ee0-4dbd-a0b2-3830b3a7bc53" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046257 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3d6b1f-2011-4f7f-bea0-1d303007fe41" containerName="registry-server"
Feb 19 15:15:56 crc kubenswrapper[4810]: E0219 15:15:56.046423 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046432 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.046533 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b1faba-e1b8-436c-aa84-ae4353c5f0a9" containerName="marketplace-operator"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.047416 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.050391 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.059524 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"]
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.194770 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.194816 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79c7\" (UniqueName: \"kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.194857 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.295675 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79c7\" (UniqueName: \"kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.295739 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.295792 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.296229 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.296344 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.319286 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79c7\" (UniqueName: \"kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7\") pod \"redhat-operators-gzkwp\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.378755 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzkwp"
Feb 19 15:15:56 crc kubenswrapper[4810]: W0219 15:15:56.806981 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb41d90e_0896_4229_a19b_a8577292bbf6.slice/crio-359d44fe31f278162e80c02cff4ce2bbbb87a6a8b85fc0045e405dd62ec98267 WatchSource:0}: Error finding container 359d44fe31f278162e80c02cff4ce2bbbb87a6a8b85fc0045e405dd62ec98267: Status 404 returned error can't find the container with id 359d44fe31f278162e80c02cff4ce2bbbb87a6a8b85fc0045e405dd62ec98267
Feb 19 15:15:56 crc kubenswrapper[4810]: I0219 15:15:56.808991 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"]
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.049503 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96zmk"]
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.050970 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.053846 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.055586 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96zmk"]
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.217636 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wtj\" (UniqueName: \"kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.217730 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.217810 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.320228 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wtj\" (UniqueName: \"kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.320398 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.320473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.321521 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.323680 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.344509 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wtj\" (UniqueName: \"kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj\") pod \"certified-operators-96zmk\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.369832 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96zmk"
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.791951 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96zmk"]
Feb 19 15:15:57 crc kubenswrapper[4810]: W0219 15:15:57.802701 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78aaed3c_dfb4_4332_bc63_4fc5342870ae.slice/crio-dc3bb4ddeed72ae65273594c85ea1bf8c3ee8a4155cd199ebba0a1a239217684 WatchSource:0}: Error finding container dc3bb4ddeed72ae65273594c85ea1bf8c3ee8a4155cd199ebba0a1a239217684: Status 404 returned error can't find the container with id dc3bb4ddeed72ae65273594c85ea1bf8c3ee8a4155cd199ebba0a1a239217684
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.816748 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerStarted","Data":"dc3bb4ddeed72ae65273594c85ea1bf8c3ee8a4155cd199ebba0a1a239217684"}
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.818886 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerID="446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797" exitCode=0
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.819143 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerDied","Data":"446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797"}
Feb 19 15:15:57 crc kubenswrapper[4810]: I0219 15:15:57.819199 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerStarted","Data":"359d44fe31f278162e80c02cff4ce2bbbb87a6a8b85fc0045e405dd62ec98267"}
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.448852 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v68v6"]
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.450518 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.452371 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.455388 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v68v6"]
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.538465 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.538570 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.538639 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlxt4\" (UniqueName: \"kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.640060 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlxt4\" (UniqueName: \"kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.640133 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.640180 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.640683 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.640713 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.661136 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlxt4\" (UniqueName: \"kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4\") pod \"community-operators-v68v6\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.775635 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v68v6"
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.829488 4810 generic.go:334] "Generic (PLEG): container finished" podID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerID="389e0276f99c2489270efb421c00ea721017dbf02f8163a0b8723a2949f2384c" exitCode=0
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.829572 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerDied","Data":"389e0276f99c2489270efb421c00ea721017dbf02f8163a0b8723a2949f2384c"}
Feb 19 15:15:58 crc kubenswrapper[4810]: I0219 15:15:58.993217 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v68v6"]
Feb 19 15:15:59 crc kubenswrapper[4810]: W0219 15:15:59.006026 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod935efdc2_5596_4207_a27b_68a8a39b6529.slice/crio-514f59846afe5a035371037e15fe72e417c83d08b819931a1c6ac5a2603476da WatchSource:0}: Error finding container 514f59846afe5a035371037e15fe72e417c83d08b819931a1c6ac5a2603476da: Status 404 returned error can't find the container with id 514f59846afe5a035371037e15fe72e417c83d08b819931a1c6ac5a2603476da
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.448499 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtg8"]
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.451078 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.455725 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.457680 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtg8"]
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.557880 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5szq\" (UniqueName: \"kubernetes.io/projected/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-kube-api-access-h5szq\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.558020 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-utilities\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.558136 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-catalog-content\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.659647 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-catalog-content\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.659786 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5szq\" (UniqueName: \"kubernetes.io/projected/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-kube-api-access-h5szq\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.659865 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-utilities\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.660337 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-catalog-content\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.660754 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-utilities\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.698758 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5szq\" (UniqueName: \"kubernetes.io/projected/b3c2bc60-712d-4ef6-b461-ad683f51f2e4-kube-api-access-h5szq\") pod \"redhat-marketplace-tgtg8\" (UID: \"b3c2bc60-712d-4ef6-b461-ad683f51f2e4\") " pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.774874 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgtg8"
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.846561 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerStarted","Data":"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1"}
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.856548 4810 generic.go:334] "Generic (PLEG): container finished" podID="935efdc2-5596-4207-a27b-68a8a39b6529" containerID="1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999" exitCode=0
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.856729 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerDied","Data":"1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999"}
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.856923 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerStarted","Data":"514f59846afe5a035371037e15fe72e417c83d08b819931a1c6ac5a2603476da"}
Feb 19 15:15:59 crc kubenswrapper[4810]: I0219 15:15:59.870669 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerStarted","Data":"6b9165a9af4fc64c1f899ec8f99221cc29231316d69d504fd16ef6a4ce57525c"}
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.048709 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgtg8"]
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.878296 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerID="563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1" exitCode=0
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.878366 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerDied","Data":"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1"}
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.881286 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerStarted","Data":"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"}
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.885442 4810 generic.go:334] "Generic (PLEG): container finished" podID="b3c2bc60-712d-4ef6-b461-ad683f51f2e4" containerID="87e5190564db63178273d5c86a9e7c06d079f45801be5670341f89e74e1884b5" exitCode=0
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.885648 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtg8" event={"ID":"b3c2bc60-712d-4ef6-b461-ad683f51f2e4","Type":"ContainerDied","Data":"87e5190564db63178273d5c86a9e7c06d079f45801be5670341f89e74e1884b5"}
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.885696 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtg8" event={"ID":"b3c2bc60-712d-4ef6-b461-ad683f51f2e4","Type":"ContainerStarted","Data":"a80ffb892789a1e8904a911324da9b863602e3697bb3ca2836be7f70a34bcf90"}
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.888676 4810 generic.go:334] "Generic (PLEG): container finished" podID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerID="6b9165a9af4fc64c1f899ec8f99221cc29231316d69d504fd16ef6a4ce57525c" exitCode=0
Feb 19 15:16:00 crc kubenswrapper[4810]: I0219 15:16:00.889838 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerDied","Data":"6b9165a9af4fc64c1f899ec8f99221cc29231316d69d504fd16ef6a4ce57525c"}
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.899553 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerStarted","Data":"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034"}
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.900961 4810 generic.go:334] "Generic (PLEG): container finished" podID="935efdc2-5596-4207-a27b-68a8a39b6529" containerID="454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936" exitCode=0
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.901025 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerDied","Data":"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"}
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.902823 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtg8" event={"ID":"b3c2bc60-712d-4ef6-b461-ad683f51f2e4","Type":"ContainerStarted","Data":"a7e5546cff139c5de86becf0e667166b7b8333859007a285f8ba2d1bb6ebca8d"}
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.904909 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerStarted","Data":"d3b7a9028d72c7eb4783b51f96495dc71d42e8907df523028cceebd689902934"}
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.923904 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gzkwp" podStartSLOduration=2.414785366 podStartE2EDuration="5.923871138s" podCreationTimestamp="2026-02-19 15:15:56 +0000 UTC" firstStartedPulling="2026-02-19 15:15:57.826173397 +0000 UTC m=+387.308203521" lastFinishedPulling="2026-02-19 15:16:01.335259169 +0000 UTC m=+390.817289293" observedRunningTime="2026-02-19 15:16:01.918639914 +0000 UTC m=+391.400670048" watchObservedRunningTime="2026-02-19 15:16:01.923871138 +0000 UTC m=+391.405901252"
Feb 19 15:16:01 crc kubenswrapper[4810]: I0219 15:16:01.963747 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96zmk" podStartSLOduration=2.4759718680000002 podStartE2EDuration="4.963723508s" podCreationTimestamp="2026-02-19 15:15:57 +0000 UTC" firstStartedPulling="2026-02-19 15:15:58.831215742 +0000 UTC m=+388.313245866" lastFinishedPulling="2026-02-19 15:16:01.318967382 +0000 UTC m=+390.800997506" observedRunningTime="2026-02-19 15:16:01.959864266 +0000 UTC m=+391.441894390" watchObservedRunningTime="2026-02-19 15:16:01.963723508 +0000 UTC m=+391.445753632"
Feb 19 15:16:02 crc kubenswrapper[4810]: I0219 15:16:02.912821 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerStarted","Data":"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"}
Feb 19 15:16:02 crc kubenswrapper[4810]: I0219 15:16:02.916389 4810 generic.go:334] "Generic (PLEG): container finished" podID="b3c2bc60-712d-4ef6-b461-ad683f51f2e4" containerID="a7e5546cff139c5de86becf0e667166b7b8333859007a285f8ba2d1bb6ebca8d" exitCode=0
Feb 19 15:16:02 crc kubenswrapper[4810]: I0219 15:16:02.917178 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtg8" event={"ID":"b3c2bc60-712d-4ef6-b461-ad683f51f2e4","Type":"ContainerDied","Data":"a7e5546cff139c5de86becf0e667166b7b8333859007a285f8ba2d1bb6ebca8d"}
Feb 19 15:16:02 crc kubenswrapper[4810]: I0219 15:16:02.933738 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v68v6" podStartSLOduration=2.450233944 podStartE2EDuration="4.933719433s" podCreationTimestamp="2026-02-19 15:15:58 +0000 UTC" firstStartedPulling="2026-02-19 15:15:59.86258381 +0000 UTC m=+389.344613934" lastFinishedPulling="2026-02-19 15:16:02.346069299 +0000 UTC m=+391.828099423" observedRunningTime="2026-02-19 15:16:02.928216852 +0000 UTC m=+392.410246976" watchObservedRunningTime="2026-02-19 15:16:02.933719433 +0000 UTC m=+392.415749557"
Feb 19 15:16:03 crc kubenswrapper[4810]: I0219 15:16:03.924711 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgtg8" event={"ID":"b3c2bc60-712d-4ef6-b461-ad683f51f2e4","Type":"ContainerStarted","Data":"8739789f355fa198e1e69c2a52485d5504ce568f33b25ddb44e09767e21bad1c"}
Feb 19 15:16:03 crc kubenswrapper[4810]: I0219 15:16:03.951444 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgtg8" podStartSLOduration=2.284627957 podStartE2EDuration="4.951427035s" podCreationTimestamp="2026-02-19 15:15:59 +0000 UTC"
firstStartedPulling="2026-02-19 15:16:00.886812407 +0000 UTC m=+390.368842561" lastFinishedPulling="2026-02-19 15:16:03.553611495 +0000 UTC m=+393.035641639" observedRunningTime="2026-02-19 15:16:03.950846861 +0000 UTC m=+393.432876985" watchObservedRunningTime="2026-02-19 15:16:03.951427035 +0000 UTC m=+393.433457159" Feb 19 15:16:06 crc kubenswrapper[4810]: I0219 15:16:06.379067 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 15:16:06 crc kubenswrapper[4810]: I0219 15:16:06.380901 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 15:16:07 crc kubenswrapper[4810]: I0219 15:16:07.370648 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 15:16:07 crc kubenswrapper[4810]: I0219 15:16:07.370979 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 15:16:07 crc kubenswrapper[4810]: I0219 15:16:07.412197 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 15:16:07 crc kubenswrapper[4810]: I0219 15:16:07.444705 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzkwp" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="registry-server" probeResult="failure" output=< Feb 19 15:16:07 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:16:07 crc kubenswrapper[4810]: > Feb 19 15:16:08 crc kubenswrapper[4810]: I0219 15:16:08.007787 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 15:16:08 crc kubenswrapper[4810]: I0219 15:16:08.776483 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-v68v6" Feb 19 15:16:08 crc kubenswrapper[4810]: I0219 15:16:08.776852 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v68v6" Feb 19 15:16:08 crc kubenswrapper[4810]: I0219 15:16:08.823534 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v68v6" Feb 19 15:16:09 crc kubenswrapper[4810]: I0219 15:16:09.026435 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v68v6" Feb 19 15:16:09 crc kubenswrapper[4810]: I0219 15:16:09.776474 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgtg8" Feb 19 15:16:09 crc kubenswrapper[4810]: I0219 15:16:09.776524 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgtg8" Feb 19 15:16:09 crc kubenswrapper[4810]: I0219 15:16:09.812182 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgtg8" Feb 19 15:16:10 crc kubenswrapper[4810]: I0219 15:16:10.051440 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgtg8" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.293483 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" podUID="a6cb5092-5f01-4dd9-a940-804d88907744" containerName="registry" containerID="cri-o://5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799" gracePeriod=30 Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.430293 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 15:16:16 crc kubenswrapper[4810]: 
I0219 15:16:16.490001 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.678344 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842166 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842252 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842595 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842672 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g96x2\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842738 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842818 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842876 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.842943 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca\") pod \"a6cb5092-5f01-4dd9-a940-804d88907744\" (UID: \"a6cb5092-5f01-4dd9-a940-804d88907744\") " Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.843249 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.843529 4810 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.845168 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.848717 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.850551 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2" (OuterVolumeSpecName: "kube-api-access-g96x2") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "kube-api-access-g96x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.850762 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.852689 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.859519 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.859898 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a6cb5092-5f01-4dd9-a940-804d88907744" (UID: "a6cb5092-5f01-4dd9-a940-804d88907744"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944559 4810 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6cb5092-5f01-4dd9-a940-804d88907744-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944606 4810 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6cb5092-5f01-4dd9-a940-804d88907744-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944620 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6cb5092-5f01-4dd9-a940-804d88907744-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944628 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944636 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g96x2\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-kube-api-access-g96x2\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:16 crc kubenswrapper[4810]: I0219 15:16:16.944646 4810 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6cb5092-5f01-4dd9-a940-804d88907744-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.006756 4810 generic.go:334] "Generic (PLEG): container finished" podID="a6cb5092-5f01-4dd9-a940-804d88907744" containerID="5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799" exitCode=0 Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 
15:16:17.006807 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" event={"ID":"a6cb5092-5f01-4dd9-a940-804d88907744","Type":"ContainerDied","Data":"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799"} Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.006863 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" event={"ID":"a6cb5092-5f01-4dd9-a940-804d88907744","Type":"ContainerDied","Data":"958cccf7fb34847e1b79d6753f3c0b3ac89a306a0da6aa21a70c7c233870987b"} Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.006890 4810 scope.go:117] "RemoveContainer" containerID="5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799" Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.007069 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7bnq2" Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.029034 4810 scope.go:117] "RemoveContainer" containerID="5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799" Feb 19 15:16:17 crc kubenswrapper[4810]: E0219 15:16:17.029510 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799\": container with ID starting with 5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799 not found: ID does not exist" containerID="5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799" Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.029556 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799"} err="failed to get container status \"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799\": rpc error: code = 
NotFound desc = could not find container \"5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799\": container with ID starting with 5251301af1540af7d5ebbbedf95175c9c2ed8c2deb1460dfe0ea2da0c4971799 not found: ID does not exist" Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.055938 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.062494 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7bnq2"] Feb 19 15:16:17 crc kubenswrapper[4810]: I0219 15:16:17.450845 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cb5092-5f01-4dd9-a940-804d88907744" path="/var/lib/kubelet/pods/a6cb5092-5f01-4dd9-a940-804d88907744/volumes" Feb 19 15:16:19 crc kubenswrapper[4810]: I0219 15:16:19.538141 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:16:19 crc kubenswrapper[4810]: I0219 15:16:19.538552 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:16:19 crc kubenswrapper[4810]: I0219 15:16:19.538605 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:16:19 crc kubenswrapper[4810]: I0219 15:16:19.539255 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:16:19 crc kubenswrapper[4810]: I0219 15:16:19.539335 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658" gracePeriod=600 Feb 19 15:16:20 crc kubenswrapper[4810]: I0219 15:16:20.026442 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658" exitCode=0 Feb 19 15:16:20 crc kubenswrapper[4810]: I0219 15:16:20.026502 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658"} Feb 19 15:16:20 crc kubenswrapper[4810]: I0219 15:16:20.026867 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5"} Feb 19 15:16:20 crc kubenswrapper[4810]: I0219 15:16:20.026899 4810 scope.go:117] "RemoveContainer" containerID="ba65a8abc0841494ba4596019b005e3cbb3f3c0897bef615b51a2ab10aa9e651" Feb 19 15:18:19 crc kubenswrapper[4810]: I0219 15:18:19.538428 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:18:19 crc kubenswrapper[4810]: I0219 15:18:19.539063 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:18:31 crc kubenswrapper[4810]: I0219 15:18:31.729302 4810 scope.go:117] "RemoveContainer" containerID="ae8bc998d14d3fe5b46f631e4cd6f287af277e6334648d3823ae6a448b5c6c06" Feb 19 15:18:49 crc kubenswrapper[4810]: I0219 15:18:49.538012 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:18:49 crc kubenswrapper[4810]: I0219 15:18:49.539190 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:19:19 crc kubenswrapper[4810]: I0219 15:19:19.537487 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:19:19 crc kubenswrapper[4810]: I0219 15:19:19.539520 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:19:19 crc kubenswrapper[4810]: I0219 15:19:19.539601 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:19:19 crc kubenswrapper[4810]: I0219 15:19:19.540528 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:19:19 crc kubenswrapper[4810]: I0219 15:19:19.540632 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5" gracePeriod=600 Feb 19 15:19:20 crc kubenswrapper[4810]: I0219 15:19:20.452477 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5" exitCode=0 Feb 19 15:19:20 crc kubenswrapper[4810]: I0219 15:19:20.452568 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5"} Feb 19 15:19:20 crc kubenswrapper[4810]: I0219 15:19:20.452652 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98"} Feb 19 15:19:20 crc kubenswrapper[4810]: I0219 15:19:20.452672 4810 scope.go:117] "RemoveContainer" containerID="4a50af848183862ac97eb1221e89b46a5711d55d54c9bd96026946aef1766658" Feb 19 15:19:31 crc kubenswrapper[4810]: I0219 15:19:31.770900 4810 scope.go:117] "RemoveContainer" containerID="a2b82759b901660a79c47e74ce7b041d5164f2bc3f90bc7fb84fbbfb09e3a3e7" Feb 19 15:19:31 crc kubenswrapper[4810]: I0219 15:19:31.799806 4810 scope.go:117] "RemoveContainer" containerID="32a771c22d7bf5ec3610bad3dd5fe81ac2ad5a407f7d5f0293d46b10d47c380d" Feb 19 15:19:31 crc kubenswrapper[4810]: I0219 15:19:31.817518 4810 scope.go:117] "RemoveContainer" containerID="471901a772079c16e7d5c328b745070f868fa529208773cf4e67ab79b2945769" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.935451 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x4csq"] Feb 19 15:20:43 crc kubenswrapper[4810]: E0219 15:20:43.936440 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cb5092-5f01-4dd9-a940-804d88907744" containerName="registry" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.936464 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cb5092-5f01-4dd9-a940-804d88907744" containerName="registry" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.936645 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cb5092-5f01-4dd9-a940-804d88907744" containerName="registry" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.937202 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.946083 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.946175 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.946205 4810 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cnlds" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.960057 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fwh4x"] Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.961731 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fwh4x" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.963479 4810 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pghqb" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.964677 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x4csq"] Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.969499 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7lspv"] Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.970353 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.972022 4810 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7dgbh" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.979547 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fwh4x"] Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.991995 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7lspv"] Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.997973 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpcbb\" (UniqueName: \"kubernetes.io/projected/54e755f0-9c2f-4d47-9979-b7b92996bab6-kube-api-access-mpcbb\") pod \"cert-manager-cainjector-cf98fcc89-x4csq\" (UID: \"54e755f0-9c2f-4d47-9979-b7b92996bab6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.998020 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gh9f\" (UniqueName: \"kubernetes.io/projected/02206e32-6f49-407e-a02b-ce61e3daabf6-kube-api-access-5gh9f\") pod \"cert-manager-858654f9db-fwh4x\" (UID: \"02206e32-6f49-407e-a02b-ce61e3daabf6\") " pod="cert-manager/cert-manager-858654f9db-fwh4x" Feb 19 15:20:43 crc kubenswrapper[4810]: I0219 15:20:43.998046 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/1224bbe4-6d8e-410e-8990-3813efdd2003-kube-api-access-4tf7b\") pod \"cert-manager-webhook-687f57d79b-7lspv\" (UID: \"1224bbe4-6d8e-410e-8990-3813efdd2003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.098536 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpcbb\" (UniqueName: \"kubernetes.io/projected/54e755f0-9c2f-4d47-9979-b7b92996bab6-kube-api-access-mpcbb\") pod \"cert-manager-cainjector-cf98fcc89-x4csq\" (UID: \"54e755f0-9c2f-4d47-9979-b7b92996bab6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.098586 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gh9f\" (UniqueName: \"kubernetes.io/projected/02206e32-6f49-407e-a02b-ce61e3daabf6-kube-api-access-5gh9f\") pod \"cert-manager-858654f9db-fwh4x\" (UID: \"02206e32-6f49-407e-a02b-ce61e3daabf6\") " pod="cert-manager/cert-manager-858654f9db-fwh4x" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.098603 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/1224bbe4-6d8e-410e-8990-3813efdd2003-kube-api-access-4tf7b\") pod \"cert-manager-webhook-687f57d79b-7lspv\" (UID: \"1224bbe4-6d8e-410e-8990-3813efdd2003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.117389 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/1224bbe4-6d8e-410e-8990-3813efdd2003-kube-api-access-4tf7b\") pod \"cert-manager-webhook-687f57d79b-7lspv\" (UID: \"1224bbe4-6d8e-410e-8990-3813efdd2003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.121966 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpcbb\" (UniqueName: \"kubernetes.io/projected/54e755f0-9c2f-4d47-9979-b7b92996bab6-kube-api-access-mpcbb\") pod \"cert-manager-cainjector-cf98fcc89-x4csq\" (UID: \"54e755f0-9c2f-4d47-9979-b7b92996bab6\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.122534 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gh9f\" (UniqueName: \"kubernetes.io/projected/02206e32-6f49-407e-a02b-ce61e3daabf6-kube-api-access-5gh9f\") pod \"cert-manager-858654f9db-fwh4x\" (UID: \"02206e32-6f49-407e-a02b-ce61e3daabf6\") " pod="cert-manager/cert-manager-858654f9db-fwh4x" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.268418 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.291914 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fwh4x" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.303145 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.488654 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x4csq"] Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.492848 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.525539 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fwh4x"] Feb 19 15:20:44 crc kubenswrapper[4810]: I0219 15:20:44.554152 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7lspv"] Feb 19 15:20:44 crc kubenswrapper[4810]: W0219 15:20:44.556187 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1224bbe4_6d8e_410e_8990_3813efdd2003.slice/crio-4492040e017c7e7b949146ade8671463d53259f0ae04d8eb151c438342128a85 WatchSource:0}: Error finding container 4492040e017c7e7b949146ade8671463d53259f0ae04d8eb151c438342128a85: Status 404 returned error can't find the container with id 4492040e017c7e7b949146ade8671463d53259f0ae04d8eb151c438342128a85 Feb 19 15:20:45 crc kubenswrapper[4810]: I0219 15:20:45.005613 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" event={"ID":"54e755f0-9c2f-4d47-9979-b7b92996bab6","Type":"ContainerStarted","Data":"34966eab10f57cd468bea1e6ace89e02275506b5d5987b7abee5a409a9579582"} Feb 19 15:20:45 crc kubenswrapper[4810]: I0219 15:20:45.007571 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" event={"ID":"1224bbe4-6d8e-410e-8990-3813efdd2003","Type":"ContainerStarted","Data":"4492040e017c7e7b949146ade8671463d53259f0ae04d8eb151c438342128a85"} Feb 19 15:20:45 crc kubenswrapper[4810]: I0219 15:20:45.010319 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fwh4x" event={"ID":"02206e32-6f49-407e-a02b-ce61e3daabf6","Type":"ContainerStarted","Data":"f81ea8ac77dddbe786b7c05794b0649b0364255186fe2d40d0ad0b0100e5ad04"} Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.049052 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" event={"ID":"54e755f0-9c2f-4d47-9979-b7b92996bab6","Type":"ContainerStarted","Data":"32a1a5f24be62818a31943591cd66d0df2271978a9ec37db4bca7012dc876bb1"} Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.051493 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" 
event={"ID":"1224bbe4-6d8e-410e-8990-3813efdd2003","Type":"ContainerStarted","Data":"f7f7d6675897c68de79ddd0d947a436e8753b1a7d6278661a00a72ce0eb1817b"} Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.051821 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.052645 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fwh4x" event={"ID":"02206e32-6f49-407e-a02b-ce61e3daabf6","Type":"ContainerStarted","Data":"249caeb282aa20b7841be3cdf691e55edd7b79f0db132244419e3bfc63afab84"} Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.066289 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x4csq" podStartSLOduration=1.916233095 podStartE2EDuration="6.066268055s" podCreationTimestamp="2026-02-19 15:20:43 +0000 UTC" firstStartedPulling="2026-02-19 15:20:44.492585031 +0000 UTC m=+673.974615155" lastFinishedPulling="2026-02-19 15:20:48.642619981 +0000 UTC m=+678.124650115" observedRunningTime="2026-02-19 15:20:49.065967878 +0000 UTC m=+678.547998022" watchObservedRunningTime="2026-02-19 15:20:49.066268055 +0000 UTC m=+678.548298179" Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.096225 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" podStartSLOduration=2.013537558 podStartE2EDuration="6.096202415s" podCreationTimestamp="2026-02-19 15:20:43 +0000 UTC" firstStartedPulling="2026-02-19 15:20:44.558805056 +0000 UTC m=+674.040835180" lastFinishedPulling="2026-02-19 15:20:48.641469913 +0000 UTC m=+678.123500037" observedRunningTime="2026-02-19 15:20:49.091796207 +0000 UTC m=+678.573826331" watchObservedRunningTime="2026-02-19 15:20:49.096202415 +0000 UTC m=+678.578232539" Feb 19 15:20:49 crc kubenswrapper[4810]: I0219 15:20:49.107521 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fwh4x" podStartSLOduration=1.893134662 podStartE2EDuration="6.10750808s" podCreationTimestamp="2026-02-19 15:20:43 +0000 UTC" firstStartedPulling="2026-02-19 15:20:44.528406524 +0000 UTC m=+674.010436638" lastFinishedPulling="2026-02-19 15:20:48.742779932 +0000 UTC m=+678.224810056" observedRunningTime="2026-02-19 15:20:49.105804929 +0000 UTC m=+678.587835053" watchObservedRunningTime="2026-02-19 15:20:49.10750808 +0000 UTC m=+678.589538204" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.308391 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-7lspv" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.329929 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8k7p5"] Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331123 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-controller" containerID="cri-o://5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331393 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331527 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-node" 
containerID="cri-o://6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331187 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="northd" containerID="cri-o://0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331531 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="sbdb" containerID="cri-o://e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331790 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-acl-logging" containerID="cri-o://702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.331854 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="nbdb" containerID="cri-o://5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.374398 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" containerID="cri-o://07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" gracePeriod=30 Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.633549 4810 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/3.log" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.636716 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovn-acl-logging/0.log" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.637575 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovn-controller/0.log" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.638037 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.702884 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zf4d"] Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703186 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703215 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703234 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703246 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703261 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 
15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703274 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703299 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="northd" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703311 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="northd" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703324 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703361 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703375 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kubecfg-setup" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703387 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kubecfg-setup" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703413 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-node" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703425 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-node" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703441 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="sbdb" Feb 19 15:20:54 crc 
kubenswrapper[4810]: I0219 15:20:54.703453 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="sbdb" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703472 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="nbdb" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703483 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="nbdb" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703498 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703509 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703525 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-acl-logging" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703537 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-acl-logging" Feb 19 15:20:54 crc kubenswrapper[4810]: E0219 15:20:54.703551 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703563 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703716 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-node" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 
15:20:54.703736 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703749 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703767 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="nbdb" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703783 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703801 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703814 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="sbdb" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703833 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703848 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="northd" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703862 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.703880 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovn-acl-logging" Feb 19 15:20:54 crc 
kubenswrapper[4810]: E0219 15:20:54.704077 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.704103 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.704264 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerName="ovnkube-controller" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.707145 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762396 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762449 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762471 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762500 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762546 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762570 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762595 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762555 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762625 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762649 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762666 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762664 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762691 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762704 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash" (OuterVolumeSpecName: "host-slash") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762714 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762789 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762843 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762877 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762915 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.762948 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763001 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v7xqv\" (UniqueName: \"kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763031 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763078 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin\") pod \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\" (UID: \"c5a8a15c-53e8-4868-8feb-dcd4e83939a4\") " Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763192 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763288 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763293 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763372 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763413 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log" (OuterVolumeSpecName: "node-log") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763687 4810 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763729 4810 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763748 4810 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763766 4810 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763783 4810 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763802 4810 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763696 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763720 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763743 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket" (OuterVolumeSpecName: "log-socket") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763762 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763775 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763793 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.763876 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.764132 4810 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.764157 4810 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.764172 4810 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.764341 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides" 
(OuterVolumeSpecName: "env-overrides") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.768693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.770674 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv" (OuterVolumeSpecName: "kube-api-access-v7xqv") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "kube-api-access-v7xqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.776980 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c5a8a15c-53e8-4868-8feb-dcd4e83939a4" (UID: "c5a8a15c-53e8-4868-8feb-dcd4e83939a4"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869163 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-kubelet\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869247 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-systemd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869283 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvfdt\" (UniqueName: \"kubernetes.io/projected/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-kube-api-access-xvfdt\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869395 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869431 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-8zf4d\" (UID: 
\"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869464 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-netns\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869504 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-ovn\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869534 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-netd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869574 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-slash\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869604 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-config\") pod \"ovnkube-node-8zf4d\" (UID: 
\"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869652 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-log-socket\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869682 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-env-overrides\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869723 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-etc-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869753 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-systemd-units\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869792 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-var-lib-openvswitch\") pod 
\"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869829 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869869 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-node-log\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869904 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-bin\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869952 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.869995 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-script-lib\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870166 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7xqv\" (UniqueName: \"kubernetes.io/projected/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-kube-api-access-v7xqv\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870191 4810 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870211 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870231 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870249 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870268 4810 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870286 4810 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870305 4810 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870345 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870364 4810 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.870381 4810 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5a8a15c-53e8-4868-8feb-dcd4e83939a4-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971318 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-bin\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971433 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971439 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-bin\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971487 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-script-lib\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971528 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971539 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-kubelet\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971580 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvfdt\" (UniqueName: \"kubernetes.io/projected/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-kube-api-access-xvfdt\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971610 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-systemd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971644 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971671 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971684 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-kubelet\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971701 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-netns\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc 
kubenswrapper[4810]: I0219 15:20:54.971751 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-netns\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971755 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-systemd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971794 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-ovn\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971845 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-netd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971846 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-slash\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971940 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-config\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971948 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-run-ovn\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971998 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-log-socket\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972016 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-slash\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972043 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-env-overrides\") pod 
\"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972084 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-log-socket\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972117 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-etc-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-systemd-units\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972192 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-etc-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972222 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-var-lib-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972243 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-systemd-units\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972371 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-var-lib-openvswitch\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972316 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972425 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972446 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-node-log\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc 
kubenswrapper[4810]: I0219 15:20:54.972475 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-node-log\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.971965 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-host-cni-netd\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972935 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-script-lib\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.972997 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-env-overrides\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.973097 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovnkube-config\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.976229 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:54 crc kubenswrapper[4810]: I0219 15:20:54.990303 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvfdt\" (UniqueName: \"kubernetes.io/projected/6ac1d960-0c17-4a48-8944-3eb4bb640ddf-kube-api-access-xvfdt\") pod \"ovnkube-node-8zf4d\" (UID: \"6ac1d960-0c17-4a48-8944-3eb4bb640ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.023161 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.090247 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovnkube-controller/3.log" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.093833 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovn-acl-logging/0.log" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.094775 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8k7p5_c5a8a15c-53e8-4868-8feb-dcd4e83939a4/ovn-controller/0.log" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095259 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095298 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" 
containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095316 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095357 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095376 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095394 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" exitCode=0 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095411 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" exitCode=143 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095430 4810 generic.go:334] "Generic (PLEG): container finished" podID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" exitCode=143 Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095500 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095486 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095688 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095736 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095767 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095794 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095847 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095867 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095889 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095936 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095948 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095960 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095972 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.095982 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096021 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 
15:20:55.096035 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096045 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096060 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096077 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096120 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096130 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096141 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096153 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096163 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096203 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096213 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096224 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096235 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096251 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096291 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096304 4810 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096315 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096364 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096377 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096388 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096400 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096414 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096463 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096475 4810 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096490 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8k7p5" event={"ID":"c5a8a15c-53e8-4868-8feb-dcd4e83939a4","Type":"ContainerDied","Data":"18f9a8e62a518ac5ed0415309aa8b6acc10ac0aef8f801aae58cbe85d9127027"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096508 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096551 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096562 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096572 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096583 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096594 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} Feb 19 
15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096633 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096644 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096656 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.096666 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.098573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"935c0ff3f82d9b767de715fd41fc9226da2a899a73c7af7bdb5c3326cb019546"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.100636 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/2.log" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.101213 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/1.log" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.101265 4810 generic.go:334] "Generic (PLEG): container finished" podID="2a45a199-beeb-4972-b796-15c958fe99d3" containerID="92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde" exitCode=2 Feb 19 
15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.101297 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerDied","Data":"92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.101322 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca"} Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.102042 4810 scope.go:117] "RemoveContainer" containerID="92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.102538 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bsztz_openshift-multus(2a45a199-beeb-4972-b796-15c958fe99d3)\"" pod="openshift-multus/multus-bsztz" podUID="2a45a199-beeb-4972-b796-15c958fe99d3" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.126597 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.157084 4810 scope.go:117] "RemoveContainer" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.179730 4810 scope.go:117] "RemoveContainer" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.185284 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8k7p5"] Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.193069 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-8k7p5"] Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.195425 4810 scope.go:117] "RemoveContainer" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.252484 4810 scope.go:117] "RemoveContainer" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.281286 4810 scope.go:117] "RemoveContainer" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.297229 4810 scope.go:117] "RemoveContainer" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.310291 4810 scope.go:117] "RemoveContainer" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.324077 4810 scope.go:117] "RemoveContainer" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.337072 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.337615 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not exist" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.337645 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} err="failed to get container status \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": rpc error: code = NotFound desc = could not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.337665 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.337866 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": container with ID starting with 824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4 not found: ID does not exist" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.337889 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} err="failed to get container status \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": rpc error: code = NotFound desc = could not find container \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": container with ID starting with 824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.337903 4810 scope.go:117] "RemoveContainer" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.338343 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": container with ID starting with e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6 not found: ID does not exist" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.338429 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} err="failed to get container status \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": rpc error: code = NotFound desc = could not find container \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": container with ID starting with e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.338478 4810 scope.go:117] "RemoveContainer" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.338820 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": container with ID starting with 5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92 not found: ID does not exist" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.338848 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} err="failed to get container status \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": rpc error: code = NotFound desc = could not find container 
\"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": container with ID starting with 5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.338859 4810 scope.go:117] "RemoveContainer" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.339147 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": container with ID starting with 0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b not found: ID does not exist" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339186 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} err="failed to get container status \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": rpc error: code = NotFound desc = could not find container \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": container with ID starting with 0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339207 4810 scope.go:117] "RemoveContainer" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.339532 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": container with ID starting with 879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d not found: ID does not exist" 
containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339553 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} err="failed to get container status \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": rpc error: code = NotFound desc = could not find container \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": container with ID starting with 879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339565 4810 scope.go:117] "RemoveContainer" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.339760 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": container with ID starting with 6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1 not found: ID does not exist" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339780 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} err="failed to get container status \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": rpc error: code = NotFound desc = could not find container \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": container with ID starting with 6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.339792 4810 scope.go:117] 
"RemoveContainer" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.340026 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": container with ID starting with 702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663 not found: ID does not exist" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340056 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} err="failed to get container status \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": rpc error: code = NotFound desc = could not find container \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": container with ID starting with 702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340076 4810 scope.go:117] "RemoveContainer" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.340342 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": container with ID starting with 5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459 not found: ID does not exist" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340365 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} err="failed to get container status \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": rpc error: code = NotFound desc = could not find container \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": container with ID starting with 5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340379 4810 scope.go:117] "RemoveContainer" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: E0219 15:20:55.340567 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": container with ID starting with 3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c not found: ID does not exist" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340589 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} err="failed to get container status \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": rpc error: code = NotFound desc = could not find container \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": container with ID starting with 3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340601 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340871 4810 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} err="failed to get container status \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": rpc error: code = NotFound desc = could not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.340900 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341152 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} err="failed to get container status \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": rpc error: code = NotFound desc = could not find container \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": container with ID starting with 824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341173 4810 scope.go:117] "RemoveContainer" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341469 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} err="failed to get container status \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": rpc error: code = NotFound desc = could not find container \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": container with ID starting with e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6 not 
found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341505 4810 scope.go:117] "RemoveContainer" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341865 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} err="failed to get container status \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": rpc error: code = NotFound desc = could not find container \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": container with ID starting with 5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.341888 4810 scope.go:117] "RemoveContainer" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342052 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} err="failed to get container status \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": rpc error: code = NotFound desc = could not find container \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": container with ID starting with 0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342070 4810 scope.go:117] "RemoveContainer" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342297 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} err="failed to get 
container status \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": rpc error: code = NotFound desc = could not find container \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": container with ID starting with 879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342362 4810 scope.go:117] "RemoveContainer" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342674 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} err="failed to get container status \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": rpc error: code = NotFound desc = could not find container \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": container with ID starting with 6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342712 4810 scope.go:117] "RemoveContainer" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.342983 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} err="failed to get container status \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": rpc error: code = NotFound desc = could not find container \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": container with ID starting with 702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343002 4810 scope.go:117] "RemoveContainer" 
containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343197 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} err="failed to get container status \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": rpc error: code = NotFound desc = could not find container \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": container with ID starting with 5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343215 4810 scope.go:117] "RemoveContainer" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343537 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} err="failed to get container status \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": rpc error: code = NotFound desc = could not find container \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": container with ID starting with 3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343599 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343900 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} err="failed to get container status \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": rpc error: code = NotFound desc = could 
not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.343918 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344215 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} err="failed to get container status \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": rpc error: code = NotFound desc = could not find container \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": container with ID starting with 824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344274 4810 scope.go:117] "RemoveContainer" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344555 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} err="failed to get container status \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": rpc error: code = NotFound desc = could not find container \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": container with ID starting with e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344574 4810 scope.go:117] "RemoveContainer" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 
15:20:55.344730 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} err="failed to get container status \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": rpc error: code = NotFound desc = could not find container \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": container with ID starting with 5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344749 4810 scope.go:117] "RemoveContainer" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344924 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} err="failed to get container status \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": rpc error: code = NotFound desc = could not find container \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": container with ID starting with 0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.344954 4810 scope.go:117] "RemoveContainer" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345167 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} err="failed to get container status \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": rpc error: code = NotFound desc = could not find container \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": container with ID starting with 
879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345186 4810 scope.go:117] "RemoveContainer" containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345481 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} err="failed to get container status \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": rpc error: code = NotFound desc = could not find container \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": container with ID starting with 6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345545 4810 scope.go:117] "RemoveContainer" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345866 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} err="failed to get container status \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": rpc error: code = NotFound desc = could not find container \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": container with ID starting with 702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.345896 4810 scope.go:117] "RemoveContainer" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346109 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} err="failed to get container status \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": rpc error: code = NotFound desc = could not find container \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": container with ID starting with 5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346139 4810 scope.go:117] "RemoveContainer" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346399 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} err="failed to get container status \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": rpc error: code = NotFound desc = could not find container \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": container with ID starting with 3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346426 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346705 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} err="failed to get container status \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": rpc error: code = NotFound desc = could not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not 
exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.346740 4810 scope.go:117] "RemoveContainer" containerID="824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347021 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4"} err="failed to get container status \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": rpc error: code = NotFound desc = could not find container \"824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4\": container with ID starting with 824c8634103fa360f2c3a6cb4c22c0185f87e2de6475beadb412027255de2cd4 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347043 4810 scope.go:117] "RemoveContainer" containerID="e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347270 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6"} err="failed to get container status \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": rpc error: code = NotFound desc = could not find container \"e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6\": container with ID starting with e21fe2d6964872913c8e5a2f983519806b72d0e2880016372ac90fc2b9f61fc6 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347307 4810 scope.go:117] "RemoveContainer" containerID="5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347549 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92"} err="failed to get container status 
\"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": rpc error: code = NotFound desc = could not find container \"5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92\": container with ID starting with 5de90aae0b44ad98711b4e4c2764cf25c491a329a7acbe9a6ec49f88fb8e7c92 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347566 4810 scope.go:117] "RemoveContainer" containerID="0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347757 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b"} err="failed to get container status \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": rpc error: code = NotFound desc = could not find container \"0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b\": container with ID starting with 0c0568b462e1168949dcfd0131ef4e94d6c3f68dd31906ee5c6a8c87f7fef25b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.347811 4810 scope.go:117] "RemoveContainer" containerID="879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348051 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d"} err="failed to get container status \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": rpc error: code = NotFound desc = could not find container \"879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d\": container with ID starting with 879052e78b8d383d0a7f4fbddb38b98fa931bec3d0d0c7ff7eb830ccb62be46d not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348080 4810 scope.go:117] "RemoveContainer" 
containerID="6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348395 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1"} err="failed to get container status \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": rpc error: code = NotFound desc = could not find container \"6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1\": container with ID starting with 6dcca2640a7625737b151a65d63dac889139b41a8dff1cd5aae2ead1a37138f1 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348455 4810 scope.go:117] "RemoveContainer" containerID="702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348699 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663"} err="failed to get container status \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": rpc error: code = NotFound desc = could not find container \"702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663\": container with ID starting with 702a868ef507f31cb27f41ccd862f38ba8d5256e5d4af19a5d8b168939932663 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.348746 4810 scope.go:117] "RemoveContainer" containerID="5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.349004 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459"} err="failed to get container status \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": rpc error: code = NotFound desc = could 
not find container \"5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459\": container with ID starting with 5089067acda7084123f239150916500b11a0547068b0a39ee60a467c372f3459 not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.349055 4810 scope.go:117] "RemoveContainer" containerID="3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.349456 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c"} err="failed to get container status \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": rpc error: code = NotFound desc = could not find container \"3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c\": container with ID starting with 3924ff75e9c9ac70988594877cb4d8bf52d6491e6335ed7cf78f43fc79728f7c not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.349492 4810 scope.go:117] "RemoveContainer" containerID="07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.349825 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b"} err="failed to get container status \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": rpc error: code = NotFound desc = could not find container \"07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b\": container with ID starting with 07a16a020eab7c7e2c4939aceb3c9448061dd4b2196bce8720223a0590498c5b not found: ID does not exist" Feb 19 15:20:55 crc kubenswrapper[4810]: I0219 15:20:55.449735 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a8a15c-53e8-4868-8feb-dcd4e83939a4" 
path="/var/lib/kubelet/pods/c5a8a15c-53e8-4868-8feb-dcd4e83939a4/volumes" Feb 19 15:20:56 crc kubenswrapper[4810]: I0219 15:20:56.112460 4810 generic.go:334] "Generic (PLEG): container finished" podID="6ac1d960-0c17-4a48-8944-3eb4bb640ddf" containerID="d1b7ed0186665eaf4b51010ff23f296cb8672043c98e5557b6cad2f3d8297654" exitCode=0 Feb 19 15:20:56 crc kubenswrapper[4810]: I0219 15:20:56.112514 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerDied","Data":"d1b7ed0186665eaf4b51010ff23f296cb8672043c98e5557b6cad2f3d8297654"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123108 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"acc4fe8a88eec0620e636c35cf7351ef642a53d545ccb338729ff194598def20"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123720 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"cd15a8f088bd89fc80b17bdfbb326dfc6c27d72a7465ea9239114a0c687f8406"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123735 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"2006d3c679333961875a1a4fee973d25f7e9e912685b913d369d8756d9b2db6b"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123750 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"0c3f881dadd35095fbcb646578d8e44a30f4321edc52312452e8b65980ddc196"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123762 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"32493140e07e5c14b6b8da2f78426c828bdcab6767d59d249ad4c7f8100eac7e"} Feb 19 15:20:57 crc kubenswrapper[4810]: I0219 15:20:57.123773 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"1aa22282c45b428229c7c21075eb2eb4d7eb514f85c55fbf2f0e7ba9b365f047"} Feb 19 15:20:59 crc kubenswrapper[4810]: I0219 15:20:59.139695 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"1bc3bee891abfe3c8b65203eac01828dd1fd03d0c253692df3b35cc9b3afcd93"} Feb 19 15:21:02 crc kubenswrapper[4810]: I0219 15:21:02.163796 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" event={"ID":"6ac1d960-0c17-4a48-8944-3eb4bb640ddf","Type":"ContainerStarted","Data":"a6c36c4db41254418cf783c07128f03be371060e1445541ae072454a7b283562"} Feb 19 15:21:02 crc kubenswrapper[4810]: I0219 15:21:02.164385 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:02 crc kubenswrapper[4810]: I0219 15:21:02.164404 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:02 crc kubenswrapper[4810]: I0219 15:21:02.190876 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" podStartSLOduration=8.190846448 podStartE2EDuration="8.190846448s" podCreationTimestamp="2026-02-19 15:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 15:21:02.189243769 +0000 UTC m=+691.671273883" watchObservedRunningTime="2026-02-19 15:21:02.190846448 +0000 UTC m=+691.672876582" Feb 19 15:21:02 crc kubenswrapper[4810]: I0219 15:21:02.192354 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:03 crc kubenswrapper[4810]: I0219 15:21:03.168878 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:03 crc kubenswrapper[4810]: I0219 15:21:03.228078 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:06 crc kubenswrapper[4810]: I0219 15:21:06.439470 4810 scope.go:117] "RemoveContainer" containerID="92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde" Feb 19 15:21:06 crc kubenswrapper[4810]: E0219 15:21:06.440252 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bsztz_openshift-multus(2a45a199-beeb-4972-b796-15c958fe99d3)\"" pod="openshift-multus/multus-bsztz" podUID="2a45a199-beeb-4972-b796-15c958fe99d3" Feb 19 15:21:17 crc kubenswrapper[4810]: I0219 15:21:17.439267 4810 scope.go:117] "RemoveContainer" containerID="92e30886e49380d7d876397116f7db4e85388275b36d2f8ee0ab84b9167f3dde" Feb 19 15:21:18 crc kubenswrapper[4810]: I0219 15:21:18.288901 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/2.log" Feb 19 15:21:18 crc kubenswrapper[4810]: I0219 15:21:18.290220 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/1.log" Feb 19 15:21:18 crc kubenswrapper[4810]: I0219 15:21:18.290379 4810 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bsztz" event={"ID":"2a45a199-beeb-4972-b796-15c958fe99d3","Type":"ContainerStarted","Data":"db0895596172526edd308f9ec4aeaa3e98f20758dd6d805914c4a9b2c94a0568"} Feb 19 15:21:19 crc kubenswrapper[4810]: I0219 15:21:19.538184 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:21:19 crc kubenswrapper[4810]: I0219 15:21:19.538725 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.204826 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z"] Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.208838 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.211837 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.223297 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z"] Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.342259 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.342971 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.343147 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsss\" (UniqueName: \"kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: 
I0219 15:21:22.444825 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.444914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.445139 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsss\" (UniqueName: \"kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.445661 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.445692 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.472185 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxsss\" (UniqueName: \"kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.534309 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:22 crc kubenswrapper[4810]: I0219 15:21:22.832963 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z"] Feb 19 15:21:22 crc kubenswrapper[4810]: W0219 15:21:22.839196 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54510d9_7d24_47bb_a55e_b50e7cff9fba.slice/crio-4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1 WatchSource:0}: Error finding container 4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1: Status 404 returned error can't find the container with id 4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1 Feb 19 15:21:23 crc kubenswrapper[4810]: I0219 15:21:23.324970 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" 
event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerStarted","Data":"f2754ed0bd782f0cfef1eee1c1af5d1b4775d6b8194b486514baacb9316e5241"} Feb 19 15:21:23 crc kubenswrapper[4810]: I0219 15:21:23.325004 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerStarted","Data":"4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1"} Feb 19 15:21:25 crc kubenswrapper[4810]: I0219 15:21:25.052131 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zf4d" Feb 19 15:21:25 crc kubenswrapper[4810]: I0219 15:21:25.340574 4810 generic.go:334] "Generic (PLEG): container finished" podID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerID="f2754ed0bd782f0cfef1eee1c1af5d1b4775d6b8194b486514baacb9316e5241" exitCode=0 Feb 19 15:21:25 crc kubenswrapper[4810]: I0219 15:21:25.340623 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerDied","Data":"f2754ed0bd782f0cfef1eee1c1af5d1b4775d6b8194b486514baacb9316e5241"} Feb 19 15:21:27 crc kubenswrapper[4810]: I0219 15:21:27.356446 4810 generic.go:334] "Generic (PLEG): container finished" podID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerID="602e608b65f405d3564f6623182522db988bda7619faf3bb81281274f4fee755" exitCode=0 Feb 19 15:21:27 crc kubenswrapper[4810]: I0219 15:21:27.356559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerDied","Data":"602e608b65f405d3564f6623182522db988bda7619faf3bb81281274f4fee755"} Feb 19 15:21:28 crc kubenswrapper[4810]: I0219 15:21:28.370098 4810 
generic.go:334] "Generic (PLEG): container finished" podID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerID="62f6b9037cd3effdb72857382a8084eec734578a33ebfe0a5303ab318003d68c" exitCode=0 Feb 19 15:21:28 crc kubenswrapper[4810]: I0219 15:21:28.370405 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerDied","Data":"62f6b9037cd3effdb72857382a8084eec734578a33ebfe0a5303ab318003d68c"} Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.728908 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.866119 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxsss\" (UniqueName: \"kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss\") pod \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.866262 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util\") pod \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.866306 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle\") pod \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\" (UID: \"e54510d9-7d24-47bb-a55e-b50e7cff9fba\") " Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.870500 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle" (OuterVolumeSpecName: "bundle") pod "e54510d9-7d24-47bb-a55e-b50e7cff9fba" (UID: "e54510d9-7d24-47bb-a55e-b50e7cff9fba"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.872891 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss" (OuterVolumeSpecName: "kube-api-access-cxsss") pod "e54510d9-7d24-47bb-a55e-b50e7cff9fba" (UID: "e54510d9-7d24-47bb-a55e-b50e7cff9fba"). InnerVolumeSpecName "kube-api-access-cxsss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.893128 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util" (OuterVolumeSpecName: "util") pod "e54510d9-7d24-47bb-a55e-b50e7cff9fba" (UID: "e54510d9-7d24-47bb-a55e-b50e7cff9fba"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.967870 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-util\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.967939 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e54510d9-7d24-47bb-a55e-b50e7cff9fba-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:29 crc kubenswrapper[4810]: I0219 15:21:29.967966 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxsss\" (UniqueName: \"kubernetes.io/projected/e54510d9-7d24-47bb-a55e-b50e7cff9fba-kube-api-access-cxsss\") on node \"crc\" DevicePath \"\"" Feb 19 15:21:30 crc kubenswrapper[4810]: I0219 15:21:30.388899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" event={"ID":"e54510d9-7d24-47bb-a55e-b50e7cff9fba","Type":"ContainerDied","Data":"4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1"} Feb 19 15:21:30 crc kubenswrapper[4810]: I0219 15:21:30.388971 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c62b8022bb32b78bafdeb6d825b4518711a45a07ca451f3468bb51c95de1ff1" Feb 19 15:21:30 crc kubenswrapper[4810]: I0219 15:21:30.389012 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z" Feb 19 15:21:31 crc kubenswrapper[4810]: I0219 15:21:31.908634 4810 scope.go:117] "RemoveContainer" containerID="c1f575901c6e6174387489f4f3a7d8cfa46089db51556b6acc9245faaf36a9ca" Feb 19 15:21:32 crc kubenswrapper[4810]: I0219 15:21:32.403188 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bsztz_2a45a199-beeb-4972-b796-15c958fe99d3/kube-multus/2.log" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.120046 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz"] Feb 19 15:21:39 crc kubenswrapper[4810]: E0219 15:21:39.120386 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="pull" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.120409 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="pull" Feb 19 15:21:39 crc kubenswrapper[4810]: E0219 15:21:39.120429 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="extract" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.120439 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="extract" Feb 19 15:21:39 crc kubenswrapper[4810]: E0219 15:21:39.120458 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="util" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.120469 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" containerName="util" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.120609 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54510d9-7d24-47bb-a55e-b50e7cff9fba" 
containerName="extract" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.121047 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.124374 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.124919 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hf4j7" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.125418 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.175720 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.185990 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6r5\" (UniqueName: \"kubernetes.io/projected/1656f52d-7771-4bbb-9642-b296d16b791e-kube-api-access-ch6r5\") pod \"obo-prometheus-operator-68bc856cb9-6jkkz\" (UID: \"1656f52d-7771-4bbb-9642-b296d16b791e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.257510 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.258128 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.259947 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-chvj7" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.260076 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.273157 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.284624 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.285484 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.291092 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.291243 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6r5\" (UniqueName: \"kubernetes.io/projected/1656f52d-7771-4bbb-9642-b296d16b791e-kube-api-access-ch6r5\") pod \"obo-prometheus-operator-68bc856cb9-6jkkz\" (UID: \"1656f52d-7771-4bbb-9642-b296d16b791e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.291293 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.305672 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.340258 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6r5\" (UniqueName: \"kubernetes.io/projected/1656f52d-7771-4bbb-9642-b296d16b791e-kube-api-access-ch6r5\") pod \"obo-prometheus-operator-68bc856cb9-6jkkz\" 
(UID: \"1656f52d-7771-4bbb-9642-b296d16b791e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.391952 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.392007 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.392042 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.392099 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: 
I0219 15:21:39.404760 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.411777 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/408628c0-0b2c-48f9-b849-ee1b124499e1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr\" (UID: \"408628c0-0b2c-48f9-b849-ee1b124499e1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.436728 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dk9c4"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.437407 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.439029 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.439848 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.440036 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zhpbs" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.471847 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dk9c4"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.493023 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwlv\" (UniqueName: \"kubernetes.io/projected/8bdf030e-92d8-45dc-ab6c-a7b241444677-kube-api-access-nfwlv\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.493123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.493176 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.493196 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdf030e-92d8-45dc-ab6c-a7b241444677-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.496927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.497148 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5debcf2-9629-4bb2-9133-f4b81748ff7d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt\" (UID: \"d5debcf2-9629-4bb2-9133-f4b81748ff7d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.574663 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.594135 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwlv\" (UniqueName: \"kubernetes.io/projected/8bdf030e-92d8-45dc-ab6c-a7b241444677-kube-api-access-nfwlv\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.594622 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdf030e-92d8-45dc-ab6c-a7b241444677-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.598959 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bdf030e-92d8-45dc-ab6c-a7b241444677-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.613396 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.615532 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwlv\" (UniqueName: \"kubernetes.io/projected/8bdf030e-92d8-45dc-ab6c-a7b241444677-kube-api-access-nfwlv\") pod \"observability-operator-59bdc8b94-dk9c4\" (UID: \"8bdf030e-92d8-45dc-ab6c-a7b241444677\") " pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.655425 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2fdxm"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.658226 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.660353 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-575f4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.670067 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2fdxm"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.692109 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz"] Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.699852 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5968625-c396-4ae0-9846-c2ceb6baf655-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.699895 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qj8z\" (UniqueName: \"kubernetes.io/projected/c5968625-c396-4ae0-9846-c2ceb6baf655-kube-api-access-4qj8z\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.753628 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.807101 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5968625-c396-4ae0-9846-c2ceb6baf655-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.807170 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qj8z\" (UniqueName: \"kubernetes.io/projected/c5968625-c396-4ae0-9846-c2ceb6baf655-kube-api-access-4qj8z\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.810158 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5968625-c396-4ae0-9846-c2ceb6baf655-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:39 crc kubenswrapper[4810]: I0219 15:21:39.854366 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qj8z\" (UniqueName: 
\"kubernetes.io/projected/c5968625-c396-4ae0-9846-c2ceb6baf655-kube-api-access-4qj8z\") pod \"perses-operator-5bf474d74f-2fdxm\" (UID: \"c5968625-c396-4ae0-9846-c2ceb6baf655\") " pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.029839 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.066661 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dk9c4"] Feb 19 15:21:40 crc kubenswrapper[4810]: W0219 15:21:40.088218 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bdf030e_92d8_45dc_ab6c_a7b241444677.slice/crio-48bd0ccfe86f887e37af92da9fce2d1dba7ea8e193d3d9b20e0cfa2118f0e40b WatchSource:0}: Error finding container 48bd0ccfe86f887e37af92da9fce2d1dba7ea8e193d3d9b20e0cfa2118f0e40b: Status 404 returned error can't find the container with id 48bd0ccfe86f887e37af92da9fce2d1dba7ea8e193d3d9b20e0cfa2118f0e40b Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.164644 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr"] Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.175064 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt"] Feb 19 15:21:40 crc kubenswrapper[4810]: W0219 15:21:40.192686 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5debcf2_9629_4bb2_9133_f4b81748ff7d.slice/crio-2bf939b44566aa5e4a0eeba65b734a5d16a8861ce489e6c630fb34f0d49e62a5 WatchSource:0}: Error finding container 2bf939b44566aa5e4a0eeba65b734a5d16a8861ce489e6c630fb34f0d49e62a5: Status 404 returned 
error can't find the container with id 2bf939b44566aa5e4a0eeba65b734a5d16a8861ce489e6c630fb34f0d49e62a5 Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.240319 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2fdxm"] Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.458750 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" event={"ID":"8bdf030e-92d8-45dc-ab6c-a7b241444677","Type":"ContainerStarted","Data":"48bd0ccfe86f887e37af92da9fce2d1dba7ea8e193d3d9b20e0cfa2118f0e40b"} Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.459932 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" event={"ID":"d5debcf2-9629-4bb2-9133-f4b81748ff7d","Type":"ContainerStarted","Data":"2bf939b44566aa5e4a0eeba65b734a5d16a8861ce489e6c630fb34f0d49e62a5"} Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.461896 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" event={"ID":"1656f52d-7771-4bbb-9642-b296d16b791e","Type":"ContainerStarted","Data":"df30e505ce981ea33f8fe527d2b7f8540bf4e32963710f4f8a4e3352e1d50c54"} Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.463390 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" event={"ID":"c5968625-c396-4ae0-9846-c2ceb6baf655","Type":"ContainerStarted","Data":"5e2fc5928a3aa091ddb850b2d963a57cd85a0a72b462a1e672fec6d1ec5add72"} Feb 19 15:21:40 crc kubenswrapper[4810]: I0219 15:21:40.464739 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" event={"ID":"408628c0-0b2c-48f9-b849-ee1b124499e1","Type":"ContainerStarted","Data":"443fc0f49ea2c02e6953693b08c3e8548561e2b0b271706a0a6a1d71103ec0e7"} Feb 19 
15:21:49 crc kubenswrapper[4810]: I0219 15:21:49.537035 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:21:49 crc kubenswrapper[4810]: I0219 15:21:49.537364 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.541621 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" event={"ID":"c5968625-c396-4ae0-9846-c2ceb6baf655","Type":"ContainerStarted","Data":"e7dbe5fab8e78e629119fbda79854b2e4fa55be862c6926e2b039cf5c8814234"} Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.541884 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.543077 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" event={"ID":"408628c0-0b2c-48f9-b849-ee1b124499e1","Type":"ContainerStarted","Data":"590b9446c65c57df99bac890da7a85c8b41483862b9f3afe4e2d065c52146bd0"} Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.544368 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" event={"ID":"8bdf030e-92d8-45dc-ab6c-a7b241444677","Type":"ContainerStarted","Data":"a3a4520aad2042a32553e90a58bd4aadcc4654046a05f16d4c1b60ffd59ab716"} Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 
15:21:50.544581 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.545650 4810 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-dk9c4 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.35:8081/healthz\": dial tcp 10.217.0.35:8081: connect: connection refused" start-of-body= Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.545686 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" podUID="8bdf030e-92d8-45dc-ab6c-a7b241444677" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.35:8081/healthz\": dial tcp 10.217.0.35:8081: connect: connection refused" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.545701 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" event={"ID":"d5debcf2-9629-4bb2-9133-f4b81748ff7d","Type":"ContainerStarted","Data":"b18e65b91e8f057e3c3b6ae93bd4918488d4e8b38a80531b1e6a27366a4cc66c"} Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.547143 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" event={"ID":"1656f52d-7771-4bbb-9642-b296d16b791e","Type":"ContainerStarted","Data":"50d98e73af9efeaa986f94b0e2869529dba239ae8a9d8832f6c0d197a582c3de"} Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.561250 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" podStartSLOduration=1.652078628 podStartE2EDuration="11.561230833s" podCreationTimestamp="2026-02-19 15:21:39 +0000 UTC" firstStartedPulling="2026-02-19 15:21:40.253567842 +0000 UTC m=+729.735597966" lastFinishedPulling="2026-02-19 
15:21:50.162720047 +0000 UTC m=+739.644750171" observedRunningTime="2026-02-19 15:21:50.55747254 +0000 UTC m=+740.039502664" watchObservedRunningTime="2026-02-19 15:21:50.561230833 +0000 UTC m=+740.043260957" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.588318 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt" podStartSLOduration=1.621156424 podStartE2EDuration="11.588302782s" podCreationTimestamp="2026-02-19 15:21:39 +0000 UTC" firstStartedPulling="2026-02-19 15:21:40.195244071 +0000 UTC m=+729.677274195" lastFinishedPulling="2026-02-19 15:21:50.162390429 +0000 UTC m=+739.644420553" observedRunningTime="2026-02-19 15:21:50.586836485 +0000 UTC m=+740.068866609" watchObservedRunningTime="2026-02-19 15:21:50.588302782 +0000 UTC m=+740.070332896" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.618410 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jkkz" podStartSLOduration=1.188823873 podStartE2EDuration="11.618391795s" podCreationTimestamp="2026-02-19 15:21:39 +0000 UTC" firstStartedPulling="2026-02-19 15:21:39.732662023 +0000 UTC m=+729.214692147" lastFinishedPulling="2026-02-19 15:21:50.162229925 +0000 UTC m=+739.644260069" observedRunningTime="2026-02-19 15:21:50.61416077 +0000 UTC m=+740.096190894" watchObservedRunningTime="2026-02-19 15:21:50.618391795 +0000 UTC m=+740.100421919" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.645868 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" podStartSLOduration=1.534729938 podStartE2EDuration="11.645852973s" podCreationTimestamp="2026-02-19 15:21:39 +0000 UTC" firstStartedPulling="2026-02-19 15:21:40.090989866 +0000 UTC m=+729.573019990" lastFinishedPulling="2026-02-19 15:21:50.202112901 +0000 UTC m=+739.684143025" 
observedRunningTime="2026-02-19 15:21:50.643140776 +0000 UTC m=+740.125170900" watchObservedRunningTime="2026-02-19 15:21:50.645852973 +0000 UTC m=+740.127883097" Feb 19 15:21:50 crc kubenswrapper[4810]: I0219 15:21:50.668824 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr" podStartSLOduration=1.6254323400000001 podStartE2EDuration="11.668807201s" podCreationTimestamp="2026-02-19 15:21:39 +0000 UTC" firstStartedPulling="2026-02-19 15:21:40.179425241 +0000 UTC m=+729.661455365" lastFinishedPulling="2026-02-19 15:21:50.222800102 +0000 UTC m=+739.704830226" observedRunningTime="2026-02-19 15:21:50.66758223 +0000 UTC m=+740.149612374" watchObservedRunningTime="2026-02-19 15:21:50.668807201 +0000 UTC m=+740.150837325" Feb 19 15:21:51 crc kubenswrapper[4810]: I0219 15:21:51.582876 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-dk9c4" Feb 19 15:22:00 crc kubenswrapper[4810]: I0219 15:22:00.032355 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-2fdxm" Feb 19 15:22:06 crc kubenswrapper[4810]: I0219 15:22:06.692263 4810 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 15:22:15 crc kubenswrapper[4810]: I0219 15:22:15.998350 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578"] Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.000790 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.004469 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578"] Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.006683 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.167500 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.167566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.167661 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6ks\" (UniqueName: \"kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: 
I0219 15:22:16.268439 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.268494 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.268568 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6ks\" (UniqueName: \"kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.269255 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.269401 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.293880 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6ks\" (UniqueName: \"kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.321053 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:16 crc kubenswrapper[4810]: I0219 15:22:16.801006 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578"] Feb 19 15:22:17 crc kubenswrapper[4810]: I0219 15:22:17.714447 4810 generic.go:334] "Generic (PLEG): container finished" podID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerID="a640c129b436c481969955036241e624b9a8fdc35ec10a9e96ec135a65005697" exitCode=0 Feb 19 15:22:17 crc kubenswrapper[4810]: I0219 15:22:17.714496 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerDied","Data":"a640c129b436c481969955036241e624b9a8fdc35ec10a9e96ec135a65005697"} Feb 19 15:22:17 crc kubenswrapper[4810]: I0219 15:22:17.714527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerStarted","Data":"1f4ec315c7d6f364ebdb7e18d891ff326300f300ee5ca683540cb86943d9a50a"} Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.369691 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.372238 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.379230 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.528240 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.528386 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.528633 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbj82\" (UniqueName: \"kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " 
pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.630401 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbj82\" (UniqueName: \"kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.630524 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.630550 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.631249 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.631386 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc 
kubenswrapper[4810]: I0219 15:22:18.666955 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbj82\" (UniqueName: \"kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82\") pod \"redhat-operators-2mft7\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.742281 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:18 crc kubenswrapper[4810]: I0219 15:22:18.968774 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:18 crc kubenswrapper[4810]: W0219 15:22:18.975695 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod132beac7_bb25_4b52_863e_0a113dc6799b.slice/crio-13841f7eec8b89d2fb39be72b7796fc00732025d2bc4c95c5e8d5b290f1c7821 WatchSource:0}: Error finding container 13841f7eec8b89d2fb39be72b7796fc00732025d2bc4c95c5e8d5b290f1c7821: Status 404 returned error can't find the container with id 13841f7eec8b89d2fb39be72b7796fc00732025d2bc4c95c5e8d5b290f1c7821 Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.538927 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.539021 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.539095 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.540061 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.540181 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98" gracePeriod=600 Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.727197 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98" exitCode=0 Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.727268 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98"} Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.727316 4810 scope.go:117] "RemoveContainer" containerID="b445a122b966a7403b8df4638cf97036239996a24e8ace1fab9b55e591849bf5" Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.729248 4810 generic.go:334] "Generic (PLEG): container finished" podID="815191f4-9d3a-4003-a32f-de4f76c9c15f" 
containerID="28fd19fef824811305dd55bf91f97cdcd9584e932d6d41db8a7f50154ca5ad1b" exitCode=0 Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.729361 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerDied","Data":"28fd19fef824811305dd55bf91f97cdcd9584e932d6d41db8a7f50154ca5ad1b"} Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.731261 4810 generic.go:334] "Generic (PLEG): container finished" podID="132beac7-bb25-4b52-863e-0a113dc6799b" containerID="b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21" exitCode=0 Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.731306 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerDied","Data":"b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21"} Feb 19 15:22:19 crc kubenswrapper[4810]: I0219 15:22:19.731352 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerStarted","Data":"13841f7eec8b89d2fb39be72b7796fc00732025d2bc4c95c5e8d5b290f1c7821"} Feb 19 15:22:22 crc kubenswrapper[4810]: I0219 15:22:22.755400 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff"} Feb 19 15:22:22 crc kubenswrapper[4810]: I0219 15:22:22.759732 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" 
event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerStarted","Data":"bb0e4b414f182e075cb0d5b61693af2b1627fe337800fe9cf5cfd83cacbbe05c"} Feb 19 15:22:22 crc kubenswrapper[4810]: I0219 15:22:22.802033 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" podStartSLOduration=6.621601321 podStartE2EDuration="7.802002583s" podCreationTimestamp="2026-02-19 15:22:15 +0000 UTC" firstStartedPulling="2026-02-19 15:22:17.716615348 +0000 UTC m=+767.198645472" lastFinishedPulling="2026-02-19 15:22:18.89701662 +0000 UTC m=+768.379046734" observedRunningTime="2026-02-19 15:22:22.79621889 +0000 UTC m=+772.278249054" watchObservedRunningTime="2026-02-19 15:22:22.802002583 +0000 UTC m=+772.284032747" Feb 19 15:22:23 crc kubenswrapper[4810]: I0219 15:22:23.772084 4810 generic.go:334] "Generic (PLEG): container finished" podID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerID="bb0e4b414f182e075cb0d5b61693af2b1627fe337800fe9cf5cfd83cacbbe05c" exitCode=0 Feb 19 15:22:23 crc kubenswrapper[4810]: I0219 15:22:23.772217 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerDied","Data":"bb0e4b414f182e075cb0d5b61693af2b1627fe337800fe9cf5cfd83cacbbe05c"} Feb 19 15:22:23 crc kubenswrapper[4810]: I0219 15:22:23.775832 4810 generic.go:334] "Generic (PLEG): container finished" podID="132beac7-bb25-4b52-863e-0a113dc6799b" containerID="585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16" exitCode=0 Feb 19 15:22:23 crc kubenswrapper[4810]: I0219 15:22:23.775898 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" 
event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerDied","Data":"585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16"} Feb 19 15:22:24 crc kubenswrapper[4810]: I0219 15:22:24.785836 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerStarted","Data":"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c"} Feb 19 15:22:24 crc kubenswrapper[4810]: I0219 15:22:24.812932 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2mft7" podStartSLOduration=2.362759411 podStartE2EDuration="6.812910202s" podCreationTimestamp="2026-02-19 15:22:18 +0000 UTC" firstStartedPulling="2026-02-19 15:22:19.733231059 +0000 UTC m=+769.215261183" lastFinishedPulling="2026-02-19 15:22:24.18338185 +0000 UTC m=+773.665411974" observedRunningTime="2026-02-19 15:22:24.811072756 +0000 UTC m=+774.293102920" watchObservedRunningTime="2026-02-19 15:22:24.812910202 +0000 UTC m=+774.294940346" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.107262 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.224213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd6ks\" (UniqueName: \"kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks\") pod \"815191f4-9d3a-4003-a32f-de4f76c9c15f\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.224323 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle\") pod \"815191f4-9d3a-4003-a32f-de4f76c9c15f\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.224812 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle" (OuterVolumeSpecName: "bundle") pod "815191f4-9d3a-4003-a32f-de4f76c9c15f" (UID: "815191f4-9d3a-4003-a32f-de4f76c9c15f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.225219 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util\") pod \"815191f4-9d3a-4003-a32f-de4f76c9c15f\" (UID: \"815191f4-9d3a-4003-a32f-de4f76c9c15f\") " Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.225620 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.234686 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks" (OuterVolumeSpecName: "kube-api-access-xd6ks") pod "815191f4-9d3a-4003-a32f-de4f76c9c15f" (UID: "815191f4-9d3a-4003-a32f-de4f76c9c15f"). InnerVolumeSpecName "kube-api-access-xd6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.235645 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util" (OuterVolumeSpecName: "util") pod "815191f4-9d3a-4003-a32f-de4f76c9c15f" (UID: "815191f4-9d3a-4003-a32f-de4f76c9c15f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.326876 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/815191f4-9d3a-4003-a32f-de4f76c9c15f-util\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.326929 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd6ks\" (UniqueName: \"kubernetes.io/projected/815191f4-9d3a-4003-a32f-de4f76c9c15f-kube-api-access-xd6ks\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.793232 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" event={"ID":"815191f4-9d3a-4003-a32f-de4f76c9c15f","Type":"ContainerDied","Data":"1f4ec315c7d6f364ebdb7e18d891ff326300f300ee5ca683540cb86943d9a50a"} Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.793272 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578" Feb 19 15:22:25 crc kubenswrapper[4810]: I0219 15:22:25.793277 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f4ec315c7d6f364ebdb7e18d891ff326300f300ee5ca683540cb86943d9a50a" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.748265 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p4hwg"] Feb 19 15:22:26 crc kubenswrapper[4810]: E0219 15:22:26.748925 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="pull" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.748941 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="pull" Feb 19 15:22:26 crc kubenswrapper[4810]: E0219 15:22:26.748956 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="extract" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.748963 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="extract" Feb 19 15:22:26 crc kubenswrapper[4810]: E0219 15:22:26.748980 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="util" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.748988 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="util" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.749109 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="815191f4-9d3a-4003-a32f-de4f76c9c15f" containerName="extract" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.759629 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.763621 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.763894 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.763961 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vhxj2" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.779477 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p4hwg"] Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.845173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8s4\" (UniqueName: \"kubernetes.io/projected/f8300a06-7526-4da5-89a6-7fff8ff284c9-kube-api-access-5s8s4\") pod \"nmstate-operator-694c9596b7-p4hwg\" (UID: \"f8300a06-7526-4da5-89a6-7fff8ff284c9\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.945887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s8s4\" (UniqueName: \"kubernetes.io/projected/f8300a06-7526-4da5-89a6-7fff8ff284c9-kube-api-access-5s8s4\") pod \"nmstate-operator-694c9596b7-p4hwg\" (UID: \"f8300a06-7526-4da5-89a6-7fff8ff284c9\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" Feb 19 15:22:26 crc kubenswrapper[4810]: I0219 15:22:26.968889 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8s4\" (UniqueName: \"kubernetes.io/projected/f8300a06-7526-4da5-89a6-7fff8ff284c9-kube-api-access-5s8s4\") pod \"nmstate-operator-694c9596b7-p4hwg\" (UID: 
\"f8300a06-7526-4da5-89a6-7fff8ff284c9\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" Feb 19 15:22:27 crc kubenswrapper[4810]: I0219 15:22:27.077359 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" Feb 19 15:22:27 crc kubenswrapper[4810]: I0219 15:22:27.504073 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p4hwg"] Feb 19 15:22:27 crc kubenswrapper[4810]: W0219 15:22:27.506403 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8300a06_7526_4da5_89a6_7fff8ff284c9.slice/crio-2067d8a741f7f2a527f960baa170366ccceafe1a7e87baec91b176022f78a46a WatchSource:0}: Error finding container 2067d8a741f7f2a527f960baa170366ccceafe1a7e87baec91b176022f78a46a: Status 404 returned error can't find the container with id 2067d8a741f7f2a527f960baa170366ccceafe1a7e87baec91b176022f78a46a Feb 19 15:22:27 crc kubenswrapper[4810]: I0219 15:22:27.805805 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" event={"ID":"f8300a06-7526-4da5-89a6-7fff8ff284c9","Type":"ContainerStarted","Data":"2067d8a741f7f2a527f960baa170366ccceafe1a7e87baec91b176022f78a46a"} Feb 19 15:22:28 crc kubenswrapper[4810]: I0219 15:22:28.743282 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:28 crc kubenswrapper[4810]: I0219 15:22:28.743656 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:29 crc kubenswrapper[4810]: I0219 15:22:29.785257 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2mft7" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="registry-server" probeResult="failure" output=< Feb 
19 15:22:29 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:22:29 crc kubenswrapper[4810]: > Feb 19 15:22:30 crc kubenswrapper[4810]: I0219 15:22:30.824252 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" event={"ID":"f8300a06-7526-4da5-89a6-7fff8ff284c9","Type":"ContainerStarted","Data":"831f71f6a9fb06cf0c9a8008a94f0058b5d372ffaa13ac7aedf8cc1032fa923d"} Feb 19 15:22:30 crc kubenswrapper[4810]: I0219 15:22:30.851555 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-p4hwg" podStartSLOduration=2.157005727 podStartE2EDuration="4.851532755s" podCreationTimestamp="2026-02-19 15:22:26 +0000 UTC" firstStartedPulling="2026-02-19 15:22:27.508912617 +0000 UTC m=+776.990942741" lastFinishedPulling="2026-02-19 15:22:30.203439605 +0000 UTC m=+779.685469769" observedRunningTime="2026-02-19 15:22:30.845835974 +0000 UTC m=+780.327866108" watchObservedRunningTime="2026-02-19 15:22:30.851532755 +0000 UTC m=+780.333562889" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.301690 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4g952"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.302897 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.309435 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mcdnc" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.326164 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.327593 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.330997 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.338272 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bhhvv"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.338979 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.347953 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4g952"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.373612 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382353 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-nmstate-lock\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382403 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-dbus-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382444 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtmz\" (UniqueName: 
\"kubernetes.io/projected/c0eb0835-6df5-4a21-b309-f178a032d027-kube-api-access-shtmz\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382476 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnkc\" (UniqueName: \"kubernetes.io/projected/ce589619-7c2f-43db-ae4f-fb43be7b07f4-kube-api-access-wbnkc\") pod \"nmstate-metrics-58c85c668d-4g952\" (UID: \"ce589619-7c2f-43db-ae4f-fb43be7b07f4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382508 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382545 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-ovs-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.382591 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtpt\" (UniqueName: \"kubernetes.io/projected/db05e782-a3d7-4cbe-be3f-f6226d894864-kube-api-access-cxtpt\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.449413 4810 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.450080 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.453537 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.453598 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lp2rk" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.453699 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.460889 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484167 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-dbus-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484235 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtmz\" (UniqueName: \"kubernetes.io/projected/c0eb0835-6df5-4a21-b309-f178a032d027-kube-api-access-shtmz\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484285 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnkc\" (UniqueName: 
\"kubernetes.io/projected/ce589619-7c2f-43db-ae4f-fb43be7b07f4-kube-api-access-wbnkc\") pod \"nmstate-metrics-58c85c668d-4g952\" (UID: \"ce589619-7c2f-43db-ae4f-fb43be7b07f4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484318 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484373 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vs2\" (UniqueName: \"kubernetes.io/projected/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-kube-api-access-75vs2\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484402 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484457 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-ovs-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484578 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-dbus-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: E0219 15:22:37.484588 4810 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484624 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-ovs-socket\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.484919 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.485027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtpt\" (UniqueName: \"kubernetes.io/projected/db05e782-a3d7-4cbe-be3f-f6226d894864-kube-api-access-cxtpt\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: E0219 15:22:37.485073 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair podName:db05e782-a3d7-4cbe-be3f-f6226d894864 nodeName:}" failed. 
No retries permitted until 2026-02-19 15:22:37.985050362 +0000 UTC m=+787.467080496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair") pod "nmstate-webhook-866bcb46dc-ckvvq" (UID: "db05e782-a3d7-4cbe-be3f-f6226d894864") : secret "openshift-nmstate-webhook" not found Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.485126 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-nmstate-lock\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.485238 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c0eb0835-6df5-4a21-b309-f178a032d027-nmstate-lock\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.503366 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtmz\" (UniqueName: \"kubernetes.io/projected/c0eb0835-6df5-4a21-b309-f178a032d027-kube-api-access-shtmz\") pod \"nmstate-handler-bhhvv\" (UID: \"c0eb0835-6df5-4a21-b309-f178a032d027\") " pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.503546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtpt\" (UniqueName: \"kubernetes.io/projected/db05e782-a3d7-4cbe-be3f-f6226d894864-kube-api-access-cxtpt\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 
15:22:37.514415 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnkc\" (UniqueName: \"kubernetes.io/projected/ce589619-7c2f-43db-ae4f-fb43be7b07f4-kube-api-access-wbnkc\") pod \"nmstate-metrics-58c85c668d-4g952\" (UID: \"ce589619-7c2f-43db-ae4f-fb43be7b07f4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.585711 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vs2\" (UniqueName: \"kubernetes.io/projected/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-kube-api-access-75vs2\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.586032 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.586059 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.587228 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.589041 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.608088 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vs2\" (UniqueName: \"kubernetes.io/projected/35fc682a-0cc9-4922-a2f2-60da1ddb1eb9-kube-api-access-75vs2\") pod \"nmstate-console-plugin-5c78fc5d65-kdwwx\" (UID: \"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.623654 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.648061 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6748b4f7c7-98l2r"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.648933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.662171 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6748b4f7c7-98l2r"] Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.664479 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686637 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-service-ca\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686706 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686738 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw7f7\" (UniqueName: \"kubernetes.io/projected/652c3d69-0b00-46b4-a0d7-752de7f222aa-kube-api-access-fw7f7\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686768 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-oauth-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686790 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-trusted-ca-bundle\") 
pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686820 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.686851 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-oauth-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.765909 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788017 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-oauth-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788401 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-service-ca\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788439 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788484 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw7f7\" (UniqueName: \"kubernetes.io/projected/652c3d69-0b00-46b4-a0d7-752de7f222aa-kube-api-access-fw7f7\") pod \"console-6748b4f7c7-98l2r\" (UID: 
\"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788513 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-oauth-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.788545 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-trusted-ca-bundle\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.789727 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-oauth-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.789948 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-service-ca\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.791464 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " 
pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.791483 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652c3d69-0b00-46b4-a0d7-752de7f222aa-trusted-ca-bundle\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.795032 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-serving-cert\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.795764 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/652c3d69-0b00-46b4-a0d7-752de7f222aa-console-oauth-config\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.803640 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw7f7\" (UniqueName: \"kubernetes.io/projected/652c3d69-0b00-46b4-a0d7-752de7f222aa-kube-api-access-fw7f7\") pod \"console-6748b4f7c7-98l2r\" (UID: \"652c3d69-0b00-46b4-a0d7-752de7f222aa\") " pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.877014 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bhhvv" event={"ID":"c0eb0835-6df5-4a21-b309-f178a032d027","Type":"ContainerStarted","Data":"3a2c5b0a2db448eec40a84472f94f2c5eb429dcca9879fce0a98035848292174"} Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.943507 4810 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx"] Feb 19 15:22:37 crc kubenswrapper[4810]: W0219 15:22:37.949614 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35fc682a_0cc9_4922_a2f2_60da1ddb1eb9.slice/crio-b859ff50f0ab64c1c062a20f499cf15998f09e60a2f0007efdcb544b17f21d7c WatchSource:0}: Error finding container b859ff50f0ab64c1c062a20f499cf15998f09e60a2f0007efdcb544b17f21d7c: Status 404 returned error can't find the container with id b859ff50f0ab64c1c062a20f499cf15998f09e60a2f0007efdcb544b17f21d7c Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.991080 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.991669 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:37 crc kubenswrapper[4810]: I0219 15:22:37.994391 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/db05e782-a3d7-4cbe-be3f-f6226d894864-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-ckvvq\" (UID: \"db05e782-a3d7-4cbe-be3f-f6226d894864\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.087348 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4g952"] Feb 19 15:22:38 crc kubenswrapper[4810]: W0219 15:22:38.094938 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce589619_7c2f_43db_ae4f_fb43be7b07f4.slice/crio-91edc095f5fdd489f1314a6a7986527e74d1fe6916196880ba67c61055949b21 WatchSource:0}: Error finding container 91edc095f5fdd489f1314a6a7986527e74d1fe6916196880ba67c61055949b21: Status 404 returned error can't find the container with id 91edc095f5fdd489f1314a6a7986527e74d1fe6916196880ba67c61055949b21 Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.174228 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6748b4f7c7-98l2r"] Feb 19 15:22:38 crc kubenswrapper[4810]: W0219 15:22:38.179555 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652c3d69_0b00_46b4_a0d7_752de7f222aa.slice/crio-c028658fc40979a28aacaf03d89e9437caaf30d64d412dcca542628fc9fe1e8d WatchSource:0}: Error finding container c028658fc40979a28aacaf03d89e9437caaf30d64d412dcca542628fc9fe1e8d: Status 404 returned error can't find the container with id c028658fc40979a28aacaf03d89e9437caaf30d64d412dcca542628fc9fe1e8d Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.245474 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.662593 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq"] Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.799036 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.850009 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.888317 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" event={"ID":"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9","Type":"ContainerStarted","Data":"b859ff50f0ab64c1c062a20f499cf15998f09e60a2f0007efdcb544b17f21d7c"} Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.889835 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" event={"ID":"db05e782-a3d7-4cbe-be3f-f6226d894864","Type":"ContainerStarted","Data":"683e3f66aa41271ffb4a031d473334039956ab097ea62cc75f22695b9ff0d6ba"} Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.891157 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6748b4f7c7-98l2r" event={"ID":"652c3d69-0b00-46b4-a0d7-752de7f222aa","Type":"ContainerStarted","Data":"297b1e417d23fdaad5a457fea57406025723128b0f338f99f0d3982125ef9eba"} Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.891193 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6748b4f7c7-98l2r" event={"ID":"652c3d69-0b00-46b4-a0d7-752de7f222aa","Type":"ContainerStarted","Data":"c028658fc40979a28aacaf03d89e9437caaf30d64d412dcca542628fc9fe1e8d"} Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.893494 
4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" event={"ID":"ce589619-7c2f-43db-ae4f-fb43be7b07f4","Type":"ContainerStarted","Data":"91edc095f5fdd489f1314a6a7986527e74d1fe6916196880ba67c61055949b21"} Feb 19 15:22:38 crc kubenswrapper[4810]: I0219 15:22:38.919195 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6748b4f7c7-98l2r" podStartSLOduration=1.919171703 podStartE2EDuration="1.919171703s" podCreationTimestamp="2026-02-19 15:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:22:38.906912941 +0000 UTC m=+788.388943105" watchObservedRunningTime="2026-02-19 15:22:38.919171703 +0000 UTC m=+788.401201827" Feb 19 15:22:39 crc kubenswrapper[4810]: I0219 15:22:39.038351 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:39 crc kubenswrapper[4810]: I0219 15:22:39.906899 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2mft7" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="registry-server" containerID="cri-o://50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c" gracePeriod=2 Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.601164 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.729584 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content\") pod \"132beac7-bb25-4b52-863e-0a113dc6799b\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.729644 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities\") pod \"132beac7-bb25-4b52-863e-0a113dc6799b\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.729694 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbj82\" (UniqueName: \"kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82\") pod \"132beac7-bb25-4b52-863e-0a113dc6799b\" (UID: \"132beac7-bb25-4b52-863e-0a113dc6799b\") " Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.730684 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities" (OuterVolumeSpecName: "utilities") pod "132beac7-bb25-4b52-863e-0a113dc6799b" (UID: "132beac7-bb25-4b52-863e-0a113dc6799b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.733872 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82" (OuterVolumeSpecName: "kube-api-access-wbj82") pod "132beac7-bb25-4b52-863e-0a113dc6799b" (UID: "132beac7-bb25-4b52-863e-0a113dc6799b"). InnerVolumeSpecName "kube-api-access-wbj82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.830653 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.830947 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbj82\" (UniqueName: \"kubernetes.io/projected/132beac7-bb25-4b52-863e-0a113dc6799b-kube-api-access-wbj82\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.854447 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "132beac7-bb25-4b52-863e-0a113dc6799b" (UID: "132beac7-bb25-4b52-863e-0a113dc6799b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.914863 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" event={"ID":"ce589619-7c2f-43db-ae4f-fb43be7b07f4","Type":"ContainerStarted","Data":"4f64332aafd522674ad097a4a805656cde7bc8e23c7de594c7efe16720e0256c"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.916639 4810 generic.go:334] "Generic (PLEG): container finished" podID="132beac7-bb25-4b52-863e-0a113dc6799b" containerID="50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c" exitCode=0 Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.916668 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerDied","Data":"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 
15:22:40.916697 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mft7" event={"ID":"132beac7-bb25-4b52-863e-0a113dc6799b","Type":"ContainerDied","Data":"13841f7eec8b89d2fb39be72b7796fc00732025d2bc4c95c5e8d5b290f1c7821"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.916719 4810 scope.go:117] "RemoveContainer" containerID="50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.916727 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mft7" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.920856 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bhhvv" event={"ID":"c0eb0835-6df5-4a21-b309-f178a032d027","Type":"ContainerStarted","Data":"1cd51c9297a58b9b01ee7ad33c1f12591f770b85f4b30d5cc206be5a7fd5e422"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.921564 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.924900 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" event={"ID":"35fc682a-0cc9-4922-a2f2-60da1ddb1eb9","Type":"ContainerStarted","Data":"13741502a9cb047e876cde5c8b5c953c930ebb28b193ad72aab1ab20875477e3"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.925037 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" event={"ID":"db05e782-a3d7-4cbe-be3f-f6226d894864","Type":"ContainerStarted","Data":"f18ec1c07060a2de47dafe6583ef0b98dceb7586fd9e2945c09eafd1d25b9a40"} Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.925194 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 
15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.931918 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132beac7-bb25-4b52-863e-0a113dc6799b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.938763 4810 scope.go:117] "RemoveContainer" containerID="585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.943965 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bhhvv" podStartSLOduration=1.02416616 podStartE2EDuration="3.943936674s" podCreationTimestamp="2026-02-19 15:22:37 +0000 UTC" firstStartedPulling="2026-02-19 15:22:37.698706686 +0000 UTC m=+787.180736810" lastFinishedPulling="2026-02-19 15:22:40.6184772 +0000 UTC m=+790.100507324" observedRunningTime="2026-02-19 15:22:40.937784532 +0000 UTC m=+790.419814696" watchObservedRunningTime="2026-02-19 15:22:40.943936674 +0000 UTC m=+790.425966848" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.954821 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kdwwx" podStartSLOduration=1.287712536 podStartE2EDuration="3.954800042s" podCreationTimestamp="2026-02-19 15:22:37 +0000 UTC" firstStartedPulling="2026-02-19 15:22:37.951517197 +0000 UTC m=+787.433547321" lastFinishedPulling="2026-02-19 15:22:40.618604703 +0000 UTC m=+790.100634827" observedRunningTime="2026-02-19 15:22:40.953978992 +0000 UTC m=+790.436009166" watchObservedRunningTime="2026-02-19 15:22:40.954800042 +0000 UTC m=+790.436830196" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.977740 4810 scope.go:117] "RemoveContainer" containerID="b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.993860 4810 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" podStartSLOduration=2.04986959 podStartE2EDuration="3.993841926s" podCreationTimestamp="2026-02-19 15:22:37 +0000 UTC" firstStartedPulling="2026-02-19 15:22:38.676972285 +0000 UTC m=+788.159002409" lastFinishedPulling="2026-02-19 15:22:40.620944601 +0000 UTC m=+790.102974745" observedRunningTime="2026-02-19 15:22:40.97820116 +0000 UTC m=+790.460231294" watchObservedRunningTime="2026-02-19 15:22:40.993841926 +0000 UTC m=+790.475872060" Feb 19 15:22:40 crc kubenswrapper[4810]: I0219 15:22:40.995997 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.001274 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mft7"] Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.013828 4810 scope.go:117] "RemoveContainer" containerID="50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c" Feb 19 15:22:41 crc kubenswrapper[4810]: E0219 15:22:41.014495 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c\": container with ID starting with 50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c not found: ID does not exist" containerID="50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c" Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.014742 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c"} err="failed to get container status \"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c\": rpc error: code = NotFound desc = could not find container \"50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c\": container with ID starting with 
50094d034a84c3ff10e55fa2dc439e3f2fd43638ff98866ab58aa48ac171229c not found: ID does not exist" Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.014964 4810 scope.go:117] "RemoveContainer" containerID="585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16" Feb 19 15:22:41 crc kubenswrapper[4810]: E0219 15:22:41.015543 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16\": container with ID starting with 585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16 not found: ID does not exist" containerID="585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16" Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.015582 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16"} err="failed to get container status \"585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16\": rpc error: code = NotFound desc = could not find container \"585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16\": container with ID starting with 585ee91571796fba78ba8e4f41233d19b09ebef57aa467f35d92b53d4fb2aa16 not found: ID does not exist" Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.015609 4810 scope.go:117] "RemoveContainer" containerID="b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21" Feb 19 15:22:41 crc kubenswrapper[4810]: E0219 15:22:41.016371 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21\": container with ID starting with b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21 not found: ID does not exist" containerID="b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21" Feb 19 15:22:41 crc 
kubenswrapper[4810]: I0219 15:22:41.016418 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21"} err="failed to get container status \"b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21\": rpc error: code = NotFound desc = could not find container \"b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21\": container with ID starting with b5538334e24ef6c83330097d1f5b71ee7b748811fc18ccd063319fa24e2faa21 not found: ID does not exist" Feb 19 15:22:41 crc kubenswrapper[4810]: I0219 15:22:41.460685 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" path="/var/lib/kubelet/pods/132beac7-bb25-4b52-863e-0a113dc6799b/volumes" Feb 19 15:22:43 crc kubenswrapper[4810]: I0219 15:22:43.951928 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" event={"ID":"ce589619-7c2f-43db-ae4f-fb43be7b07f4","Type":"ContainerStarted","Data":"5643a79363cc749a73847da14fca76d6e3b9082bb426d8952b68f72c613cc7a7"} Feb 19 15:22:43 crc kubenswrapper[4810]: I0219 15:22:43.982172 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4g952" podStartSLOduration=1.942382976 podStartE2EDuration="6.982148612s" podCreationTimestamp="2026-02-19 15:22:37 +0000 UTC" firstStartedPulling="2026-02-19 15:22:38.096837004 +0000 UTC m=+787.578867148" lastFinishedPulling="2026-02-19 15:22:43.13660262 +0000 UTC m=+792.618632784" observedRunningTime="2026-02-19 15:22:43.978133533 +0000 UTC m=+793.460163697" watchObservedRunningTime="2026-02-19 15:22:43.982148612 +0000 UTC m=+793.464178736" Feb 19 15:22:47 crc kubenswrapper[4810]: I0219 15:22:47.700230 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bhhvv" Feb 19 15:22:47 crc 
kubenswrapper[4810]: I0219 15:22:47.992251 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:47 crc kubenswrapper[4810]: I0219 15:22:47.992354 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:48 crc kubenswrapper[4810]: I0219 15:22:48.000194 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:48 crc kubenswrapper[4810]: I0219 15:22:48.996948 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6748b4f7c7-98l2r" Feb 19 15:22:49 crc kubenswrapper[4810]: I0219 15:22:49.079798 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:22:58 crc kubenswrapper[4810]: I0219 15:22:58.251140 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-ckvvq" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.946232 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb"] Feb 19 15:23:11 crc kubenswrapper[4810]: E0219 15:23:11.948464 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="extract-content" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.948478 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="extract-content" Feb 19 15:23:11 crc kubenswrapper[4810]: E0219 15:23:11.948506 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="extract-utilities" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.948513 4810 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="extract-utilities" Feb 19 15:23:11 crc kubenswrapper[4810]: E0219 15:23:11.948531 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="registry-server" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.948537 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="registry-server" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.948648 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="132beac7-bb25-4b52-863e-0a113dc6799b" containerName="registry-server" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.949393 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.953149 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.958027 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb"] Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.984954 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.985013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:11 crc kubenswrapper[4810]: I0219 15:23:11.985047 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpbt\" (UniqueName: \"kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.086624 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.086924 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bpbt\" (UniqueName: \"kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.087053 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.087357 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.087447 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.135208 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bpbt\" (UniqueName: \"kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.267281 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:12 crc kubenswrapper[4810]: I0219 15:23:12.717608 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb"] Feb 19 15:23:13 crc kubenswrapper[4810]: I0219 15:23:13.176027 4810 generic.go:334] "Generic (PLEG): container finished" podID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerID="9b2fd4a0249047278a14bf1d594fa14cb2acf71a341fef933ac568cfa7ce7a75" exitCode=0 Feb 19 15:23:13 crc kubenswrapper[4810]: I0219 15:23:13.176091 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" event={"ID":"a861f8a3-be34-4fc0-96cb-42502d0a3bab","Type":"ContainerDied","Data":"9b2fd4a0249047278a14bf1d594fa14cb2acf71a341fef933ac568cfa7ce7a75"} Feb 19 15:23:13 crc kubenswrapper[4810]: I0219 15:23:13.176409 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" event={"ID":"a861f8a3-be34-4fc0-96cb-42502d0a3bab","Type":"ContainerStarted","Data":"274e568a39b3ec5f5262ddb4db28a79855f75a688c00d404905429361e70fd1e"} Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.147079 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4hddt" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" containerName="console" containerID="cri-o://f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106" gracePeriod=15 Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.608833 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4hddt_362cd55c-b576-44bd-843c-078bf26b3b1e/console/0.log" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.608901 4810 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625257 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625349 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625383 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625438 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625485 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnqb7\" (UniqueName: \"kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625528 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.625573 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config\") pod \"362cd55c-b576-44bd-843c-078bf26b3b1e\" (UID: \"362cd55c-b576-44bd-843c-078bf26b3b1e\") " Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.626369 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.626413 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca" (OuterVolumeSpecName: "service-ca") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.626611 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.626920 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config" (OuterVolumeSpecName: "console-config") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.631161 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.633072 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7" (OuterVolumeSpecName: "kube-api-access-pnqb7") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "kube-api-access-pnqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.639415 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "362cd55c-b576-44bd-843c-078bf26b3b1e" (UID: "362cd55c-b576-44bd-843c-078bf26b3b1e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727622 4810 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727675 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnqb7\" (UniqueName: \"kubernetes.io/projected/362cd55c-b576-44bd-843c-078bf26b3b1e-kube-api-access-pnqb7\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727695 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727712 4810 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727730 4810 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/362cd55c-b576-44bd-843c-078bf26b3b1e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727748 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:14 crc kubenswrapper[4810]: I0219 15:23:14.727763 4810 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/362cd55c-b576-44bd-843c-078bf26b3b1e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:15 crc 
kubenswrapper[4810]: I0219 15:23:15.198517 4810 generic.go:334] "Generic (PLEG): container finished" podID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerID="b6e2bcd802e63f5a2e306f8014614133f88f2a3f2f3d28a5c3f931f341252672" exitCode=0 Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.198676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" event={"ID":"a861f8a3-be34-4fc0-96cb-42502d0a3bab","Type":"ContainerDied","Data":"b6e2bcd802e63f5a2e306f8014614133f88f2a3f2f3d28a5c3f931f341252672"} Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202015 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4hddt_362cd55c-b576-44bd-843c-078bf26b3b1e/console/0.log" Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202102 4810 generic.go:334] "Generic (PLEG): container finished" podID="362cd55c-b576-44bd-843c-078bf26b3b1e" containerID="f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106" exitCode=2 Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202156 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4hddt" event={"ID":"362cd55c-b576-44bd-843c-078bf26b3b1e","Type":"ContainerDied","Data":"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106"} Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202240 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4hddt" event={"ID":"362cd55c-b576-44bd-843c-078bf26b3b1e","Type":"ContainerDied","Data":"a1d8a2975e22eb56e23640790355f60287c10a0504259d614d431ce0dc78edbb"} Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202242 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4hddt" Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.202278 4810 scope.go:117] "RemoveContainer" containerID="f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106" Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.241642 4810 scope.go:117] "RemoveContainer" containerID="f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106" Feb 19 15:23:15 crc kubenswrapper[4810]: E0219 15:23:15.242516 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106\": container with ID starting with f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106 not found: ID does not exist" containerID="f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106" Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.242799 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106"} err="failed to get container status \"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106\": rpc error: code = NotFound desc = could not find container \"f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106\": container with ID starting with f08283b34555217bd3055e9709afdf48fe7bc7bac3a42fcf9d68868beda18106 not found: ID does not exist" Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.260794 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.264522 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4hddt"] Feb 19 15:23:15 crc kubenswrapper[4810]: I0219 15:23:15.453979 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" 
path="/var/lib/kubelet/pods/362cd55c-b576-44bd-843c-078bf26b3b1e/volumes" Feb 19 15:23:16 crc kubenswrapper[4810]: I0219 15:23:16.216396 4810 generic.go:334] "Generic (PLEG): container finished" podID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerID="814a9f55e6d8349a2b1db9b86dbab3e606bdea70dc2cb74e2614b5f286c6442a" exitCode=0 Feb 19 15:23:16 crc kubenswrapper[4810]: I0219 15:23:16.216535 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" event={"ID":"a861f8a3-be34-4fc0-96cb-42502d0a3bab","Type":"ContainerDied","Data":"814a9f55e6d8349a2b1db9b86dbab3e606bdea70dc2cb74e2614b5f286c6442a"} Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.479263 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.568452 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util\") pod \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.568545 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle\") pod \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.568616 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpbt\" (UniqueName: \"kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt\") pod \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\" (UID: \"a861f8a3-be34-4fc0-96cb-42502d0a3bab\") " Feb 19 15:23:17 crc 
kubenswrapper[4810]: I0219 15:23:17.570183 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle" (OuterVolumeSpecName: "bundle") pod "a861f8a3-be34-4fc0-96cb-42502d0a3bab" (UID: "a861f8a3-be34-4fc0-96cb-42502d0a3bab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.575167 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt" (OuterVolumeSpecName: "kube-api-access-6bpbt") pod "a861f8a3-be34-4fc0-96cb-42502d0a3bab" (UID: "a861f8a3-be34-4fc0-96cb-42502d0a3bab"). InnerVolumeSpecName "kube-api-access-6bpbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.600278 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util" (OuterVolumeSpecName: "util") pod "a861f8a3-be34-4fc0-96cb-42502d0a3bab" (UID: "a861f8a3-be34-4fc0-96cb-42502d0a3bab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.670291 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bpbt\" (UniqueName: \"kubernetes.io/projected/a861f8a3-be34-4fc0-96cb-42502d0a3bab-kube-api-access-6bpbt\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.670361 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-util\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:17 crc kubenswrapper[4810]: I0219 15:23:17.670380 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a861f8a3-be34-4fc0-96cb-42502d0a3bab-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:23:18 crc kubenswrapper[4810]: I0219 15:23:18.248473 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" event={"ID":"a861f8a3-be34-4fc0-96cb-42502d0a3bab","Type":"ContainerDied","Data":"274e568a39b3ec5f5262ddb4db28a79855f75a688c00d404905429361e70fd1e"} Feb 19 15:23:18 crc kubenswrapper[4810]: I0219 15:23:18.248557 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="274e568a39b3ec5f5262ddb4db28a79855f75a688c00d404905429361e70fd1e" Feb 19 15:23:18 crc kubenswrapper[4810]: I0219 15:23:18.248729 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.449738 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8"] Feb 19 15:23:28 crc kubenswrapper[4810]: E0219 15:23:28.450552 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="extract" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450569 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="extract" Feb 19 15:23:28 crc kubenswrapper[4810]: E0219 15:23:28.450584 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="pull" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450591 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="pull" Feb 19 15:23:28 crc kubenswrapper[4810]: E0219 15:23:28.450605 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" containerName="console" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450613 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" containerName="console" Feb 19 15:23:28 crc kubenswrapper[4810]: E0219 15:23:28.450624 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="util" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450630 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="util" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450732 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="362cd55c-b576-44bd-843c-078bf26b3b1e" 
containerName="console" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.450745 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a861f8a3-be34-4fc0-96cb-42502d0a3bab" containerName="extract" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.451154 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.453411 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.455807 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.455979 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fp427" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.456058 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.456112 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.513687 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8"] Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.535433 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-apiservice-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 
19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.535541 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z875r\" (UniqueName: \"kubernetes.io/projected/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-kube-api-access-z875r\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.535565 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-webhook-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.636521 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z875r\" (UniqueName: \"kubernetes.io/projected/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-kube-api-access-z875r\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.636576 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-webhook-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.636614 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-apiservice-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.642955 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-apiservice-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.652138 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-webhook-cert\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.675840 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z875r\" (UniqueName: \"kubernetes.io/projected/f26047c7-b8cc-4ce2-8a48-4b380ab225c0-kube-api-access-z875r\") pod \"metallb-operator-controller-manager-75f48c59dc-m5vm8\" (UID: \"f26047c7-b8cc-4ce2-8a48-4b380ab225c0\") " pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.768069 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.794667 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c"] Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.797049 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.800522 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gt529" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.800827 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.805101 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.812690 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c"] Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.838894 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-apiservice-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.838943 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt545\" (UniqueName: \"kubernetes.io/projected/3d62866f-b047-419d-8eb0-848b0df84e63-kube-api-access-xt545\") pod 
\"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.838999 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-webhook-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.940817 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-apiservice-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.940860 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt545\" (UniqueName: \"kubernetes.io/projected/3d62866f-b047-419d-8eb0-848b0df84e63-kube-api-access-xt545\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.940914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-webhook-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.957675 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-webhook-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.961000 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt545\" (UniqueName: \"kubernetes.io/projected/3d62866f-b047-419d-8eb0-848b0df84e63-kube-api-access-xt545\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:28 crc kubenswrapper[4810]: I0219 15:23:28.962916 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d62866f-b047-419d-8eb0-848b0df84e63-apiservice-cert\") pod \"metallb-operator-webhook-server-595d5f7545-vfb4c\" (UID: \"3d62866f-b047-419d-8eb0-848b0df84e63\") " pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:29 crc kubenswrapper[4810]: I0219 15:23:29.084785 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8"] Feb 19 15:23:29 crc kubenswrapper[4810]: I0219 15:23:29.147888 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:29 crc kubenswrapper[4810]: I0219 15:23:29.323508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" event={"ID":"f26047c7-b8cc-4ce2-8a48-4b380ab225c0","Type":"ContainerStarted","Data":"5dd9f3097919a706ca8eb8ca45cd8e19a5d942ce0b5dadcdec6bc31b84aebcc0"} Feb 19 15:23:29 crc kubenswrapper[4810]: I0219 15:23:29.355501 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c"] Feb 19 15:23:30 crc kubenswrapper[4810]: I0219 15:23:30.331313 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" event={"ID":"3d62866f-b047-419d-8eb0-848b0df84e63","Type":"ContainerStarted","Data":"131c744b5c6df9c014ae6acbb2107c63839e5f4efb0421c5ef2239d7606c84be"} Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.359882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" event={"ID":"f26047c7-b8cc-4ce2-8a48-4b380ab225c0","Type":"ContainerStarted","Data":"810b857650a6a1110481e1e3a731ff3c0ad5950ad9aea829f16d3edcc1491599"} Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.360488 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.361722 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" event={"ID":"3d62866f-b047-419d-8eb0-848b0df84e63","Type":"ContainerStarted","Data":"f54462b9ff2ee7096d94d8c2b86235acd7f13dd74d2f35042e868d77f5ec3490"} Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.361891 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.383743 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" podStartSLOduration=1.4508706980000001 podStartE2EDuration="6.383721085s" podCreationTimestamp="2026-02-19 15:23:28 +0000 UTC" firstStartedPulling="2026-02-19 15:23:29.092746808 +0000 UTC m=+838.574776932" lastFinishedPulling="2026-02-19 15:23:34.025597185 +0000 UTC m=+843.507627319" observedRunningTime="2026-02-19 15:23:34.376684711 +0000 UTC m=+843.858714835" watchObservedRunningTime="2026-02-19 15:23:34.383721085 +0000 UTC m=+843.865751209" Feb 19 15:23:34 crc kubenswrapper[4810]: I0219 15:23:34.404871 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" podStartSLOduration=1.722910632 podStartE2EDuration="6.404834696s" podCreationTimestamp="2026-02-19 15:23:28 +0000 UTC" firstStartedPulling="2026-02-19 15:23:29.362877635 +0000 UTC m=+838.844907759" lastFinishedPulling="2026-02-19 15:23:34.044801689 +0000 UTC m=+843.526831823" observedRunningTime="2026-02-19 15:23:34.39770452 +0000 UTC m=+843.879734664" watchObservedRunningTime="2026-02-19 15:23:34.404834696 +0000 UTC m=+843.886864820" Feb 19 15:23:49 crc kubenswrapper[4810]: I0219 15:23:49.152783 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-595d5f7545-vfb4c" Feb 19 15:24:08 crc kubenswrapper[4810]: I0219 15:24:08.773374 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75f48c59dc-m5vm8" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.536797 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7rbxk"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 
15:24:09.552098 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.552854 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.553416 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.558728 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.558907 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5d75x" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.559035 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.561541 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.569404 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.643841 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hllgd"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.644954 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.650025 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.650642 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.650850 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hzx2c" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.652680 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.667209 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jngcz"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.668529 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.670552 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.698506 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jngcz"] Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705435 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8csk\" (UniqueName: \"kubernetes.io/projected/1ee9f8f3-05a8-4648-b48d-4975285346d7-kube-api-access-v8csk\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705521 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-conf\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705552 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-startup\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705572 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-reloader\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 
crc kubenswrapper[4810]: I0219 15:24:09.705588 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705612 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kk5\" (UniqueName: \"kubernetes.io/projected/66c7e596-ffa3-4687-8c80-21acecbd8075-kube-api-access-w5kk5\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-sockets\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705679 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics-certs\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.705697 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee9f8f3-05a8-4648-b48d-4975285346d7-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807198 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-sockets\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807264 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics-certs\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807296 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-metrics-certs\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807373 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee9f8f3-05a8-4648-b48d-4975285346d7-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807406 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8csk\" (UniqueName: \"kubernetes.io/projected/1ee9f8f3-05a8-4648-b48d-4975285346d7-kube-api-access-v8csk\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807436 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl7wn\" (UniqueName: \"kubernetes.io/projected/c9d97974-67d2-42e5-89fe-b6db106a47c4-kube-api-access-gl7wn\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807467 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-conf\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807490 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkg5\" (UniqueName: \"kubernetes.io/projected/781d467e-8522-43a3-a552-1ceebc40cddd-kube-api-access-zqkg5\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807513 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-startup\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807539 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-reloader\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807561 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/c9d97974-67d2-42e5-89fe-b6db106a47c4-metallb-excludel2\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807583 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807618 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kk5\" (UniqueName: \"kubernetes.io/projected/66c7e596-ffa3-4687-8c80-21acecbd8075-kube-api-access-w5kk5\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807648 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-cert\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807673 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.807697 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-metrics-certs\") pod \"speaker-hllgd\" (UID: 
\"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.808603 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-sockets\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.808662 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.808753 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-conf\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.808788 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66c7e596-ffa3-4687-8c80-21acecbd8075-reloader\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.809133 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66c7e596-ffa3-4687-8c80-21acecbd8075-frr-startup\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.817055 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/1ee9f8f3-05a8-4648-b48d-4975285346d7-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.829852 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66c7e596-ffa3-4687-8c80-21acecbd8075-metrics-certs\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.832465 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kk5\" (UniqueName: \"kubernetes.io/projected/66c7e596-ffa3-4687-8c80-21acecbd8075-kube-api-access-w5kk5\") pod \"frr-k8s-7rbxk\" (UID: \"66c7e596-ffa3-4687-8c80-21acecbd8075\") " pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.834004 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8csk\" (UniqueName: \"kubernetes.io/projected/1ee9f8f3-05a8-4648-b48d-4975285346d7-kube-api-access-v8csk\") pod \"frr-k8s-webhook-server-78b44bf5bb-cwj24\" (UID: \"1ee9f8f3-05a8-4648-b48d-4975285346d7\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.882751 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.894948 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913355 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-metrics-certs\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913405 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl7wn\" (UniqueName: \"kubernetes.io/projected/c9d97974-67d2-42e5-89fe-b6db106a47c4-kube-api-access-gl7wn\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913437 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkg5\" (UniqueName: \"kubernetes.io/projected/781d467e-8522-43a3-a552-1ceebc40cddd-kube-api-access-zqkg5\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913471 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c9d97974-67d2-42e5-89fe-b6db106a47c4-metallb-excludel2\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913511 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-cert\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc 
kubenswrapper[4810]: I0219 15:24:09.913531 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.913550 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-metrics-certs\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: E0219 15:24:09.913947 4810 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 15:24:09 crc kubenswrapper[4810]: E0219 15:24:09.914031 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist podName:c9d97974-67d2-42e5-89fe-b6db106a47c4 nodeName:}" failed. No retries permitted until 2026-02-19 15:24:10.414008828 +0000 UTC m=+879.896038952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist") pod "speaker-hllgd" (UID: "c9d97974-67d2-42e5-89fe-b6db106a47c4") : secret "metallb-memberlist" not found Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.917209 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-metrics-certs\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.919191 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-metrics-certs\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.920055 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.922120 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c9d97974-67d2-42e5-89fe-b6db106a47c4-metallb-excludel2\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.926923 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/781d467e-8522-43a3-a552-1ceebc40cddd-cert\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.931674 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gl7wn\" (UniqueName: \"kubernetes.io/projected/c9d97974-67d2-42e5-89fe-b6db106a47c4-kube-api-access-gl7wn\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.932159 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkg5\" (UniqueName: \"kubernetes.io/projected/781d467e-8522-43a3-a552-1ceebc40cddd-kube-api-access-zqkg5\") pod \"controller-69bbfbf88f-jngcz\" (UID: \"781d467e-8522-43a3-a552-1ceebc40cddd\") " pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:09 crc kubenswrapper[4810]: I0219 15:24:09.984443 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.300485 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24"] Feb 19 15:24:10 crc kubenswrapper[4810]: W0219 15:24:10.309390 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee9f8f3_05a8_4648_b48d_4975285346d7.slice/crio-274b78254a5e1abda79bd4f33f25dc39c7075df4aacc7058ceb43d19c42c0fac WatchSource:0}: Error finding container 274b78254a5e1abda79bd4f33f25dc39c7075df4aacc7058ceb43d19c42c0fac: Status 404 returned error can't find the container with id 274b78254a5e1abda79bd4f33f25dc39c7075df4aacc7058ceb43d19c42c0fac Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.399577 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jngcz"] Feb 19 15:24:10 crc kubenswrapper[4810]: W0219 15:24:10.407131 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod781d467e_8522_43a3_a552_1ceebc40cddd.slice/crio-37ae2f1281734abb3dd1fb3e9f5ecbe4af3939d71b62f2c26fff801e9564e799 WatchSource:0}: Error finding container 37ae2f1281734abb3dd1fb3e9f5ecbe4af3939d71b62f2c26fff801e9564e799: Status 404 returned error can't find the container with id 37ae2f1281734abb3dd1fb3e9f5ecbe4af3939d71b62f2c26fff801e9564e799 Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.420561 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:10 crc kubenswrapper[4810]: E0219 15:24:10.420737 4810 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 15:24:10 crc kubenswrapper[4810]: E0219 15:24:10.420811 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist podName:c9d97974-67d2-42e5-89fe-b6db106a47c4 nodeName:}" failed. No retries permitted until 2026-02-19 15:24:11.420789898 +0000 UTC m=+880.902820022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist") pod "speaker-hllgd" (UID: "c9d97974-67d2-42e5-89fe-b6db106a47c4") : secret "metallb-memberlist" not found Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.609422 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jngcz" event={"ID":"781d467e-8522-43a3-a552-1ceebc40cddd","Type":"ContainerStarted","Data":"9c2c5945aa86b126530081c5a05f63b9e950122cbc46898cd1ca1628bff224f8"} Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.609964 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jngcz" event={"ID":"781d467e-8522-43a3-a552-1ceebc40cddd","Type":"ContainerStarted","Data":"37ae2f1281734abb3dd1fb3e9f5ecbe4af3939d71b62f2c26fff801e9564e799"} Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.610564 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" event={"ID":"1ee9f8f3-05a8-4648-b48d-4975285346d7","Type":"ContainerStarted","Data":"274b78254a5e1abda79bd4f33f25dc39c7075df4aacc7058ceb43d19c42c0fac"} Feb 19 15:24:10 crc kubenswrapper[4810]: I0219 15:24:10.611621 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"8fa2c12028d5ed3f09c76f7e66de9e316bec80a75cb33de0fe7b0642d2605373"} Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.436448 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.456791 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c9d97974-67d2-42e5-89fe-b6db106a47c4-memberlist\") pod \"speaker-hllgd\" (UID: \"c9d97974-67d2-42e5-89fe-b6db106a47c4\") " pod="metallb-system/speaker-hllgd" Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.462646 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hllgd" Feb 19 15:24:11 crc kubenswrapper[4810]: W0219 15:24:11.531579 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d97974_67d2_42e5_89fe_b6db106a47c4.slice/crio-eb3c0b5353d5abfc159f28110e22d7cba48bff35f08c3420e5afa93f8904566c WatchSource:0}: Error finding container eb3c0b5353d5abfc159f28110e22d7cba48bff35f08c3420e5afa93f8904566c: Status 404 returned error can't find the container with id eb3c0b5353d5abfc159f28110e22d7cba48bff35f08c3420e5afa93f8904566c Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.619728 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jngcz" event={"ID":"781d467e-8522-43a3-a552-1ceebc40cddd","Type":"ContainerStarted","Data":"14156917ac66334fd3bc1a6be64cb2045dd223ce40ab384af4688fef71fa71f8"} Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.619987 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.621290 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hllgd" event={"ID":"c9d97974-67d2-42e5-89fe-b6db106a47c4","Type":"ContainerStarted","Data":"eb3c0b5353d5abfc159f28110e22d7cba48bff35f08c3420e5afa93f8904566c"} Feb 19 15:24:11 crc kubenswrapper[4810]: I0219 15:24:11.638639 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jngcz" podStartSLOduration=2.638622121 podStartE2EDuration="2.638622121s" 
podCreationTimestamp="2026-02-19 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:24:11.636635992 +0000 UTC m=+881.118666116" watchObservedRunningTime="2026-02-19 15:24:11.638622121 +0000 UTC m=+881.120652245" Feb 19 15:24:12 crc kubenswrapper[4810]: I0219 15:24:12.631984 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hllgd" event={"ID":"c9d97974-67d2-42e5-89fe-b6db106a47c4","Type":"ContainerStarted","Data":"134104236f48a53956d890285b237a5508919997d691fed7f558b3fd42bca024"} Feb 19 15:24:12 crc kubenswrapper[4810]: I0219 15:24:12.632040 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hllgd" event={"ID":"c9d97974-67d2-42e5-89fe-b6db106a47c4","Type":"ContainerStarted","Data":"47a20248c1a45b8d96de33c2bc69ba29568be3e2e976fc1017e8757777318ad6"} Feb 19 15:24:12 crc kubenswrapper[4810]: I0219 15:24:12.632128 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hllgd" Feb 19 15:24:12 crc kubenswrapper[4810]: I0219 15:24:12.657898 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hllgd" podStartSLOduration=3.657880041 podStartE2EDuration="3.657880041s" podCreationTimestamp="2026-02-19 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:24:12.652342814 +0000 UTC m=+882.134372928" watchObservedRunningTime="2026-02-19 15:24:12.657880041 +0000 UTC m=+882.139910165" Feb 19 15:24:18 crc kubenswrapper[4810]: I0219 15:24:18.682820 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" event={"ID":"1ee9f8f3-05a8-4648-b48d-4975285346d7","Type":"ContainerStarted","Data":"73cf8d14acab7446a45fab0ac5fbc8426dcfb7b622c6a2e7262335444e0db051"} Feb 19 
15:24:18 crc kubenswrapper[4810]: I0219 15:24:18.683604 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:18 crc kubenswrapper[4810]: I0219 15:24:18.685156 4810 generic.go:334] "Generic (PLEG): container finished" podID="66c7e596-ffa3-4687-8c80-21acecbd8075" containerID="cbc70f1e7e14eb1ce82644cc3a99ef44920da0cedb66b4a74d35e9510b2f0d1d" exitCode=0 Feb 19 15:24:18 crc kubenswrapper[4810]: I0219 15:24:18.685242 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerDied","Data":"cbc70f1e7e14eb1ce82644cc3a99ef44920da0cedb66b4a74d35e9510b2f0d1d"} Feb 19 15:24:18 crc kubenswrapper[4810]: I0219 15:24:18.707672 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" podStartSLOduration=2.341561547 podStartE2EDuration="9.707642259s" podCreationTimestamp="2026-02-19 15:24:09 +0000 UTC" firstStartedPulling="2026-02-19 15:24:10.312348771 +0000 UTC m=+879.794378895" lastFinishedPulling="2026-02-19 15:24:17.678429453 +0000 UTC m=+887.160459607" observedRunningTime="2026-02-19 15:24:18.702021691 +0000 UTC m=+888.184051815" watchObservedRunningTime="2026-02-19 15:24:18.707642259 +0000 UTC m=+888.189672423" Feb 19 15:24:19 crc kubenswrapper[4810]: I0219 15:24:19.701694 4810 generic.go:334] "Generic (PLEG): container finished" podID="66c7e596-ffa3-4687-8c80-21acecbd8075" containerID="02578ae2ec427a212d8c044ddf2bfaa326b5d3dd70c9a09659d1aee617f1535f" exitCode=0 Feb 19 15:24:19 crc kubenswrapper[4810]: I0219 15:24:19.701809 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerDied","Data":"02578ae2ec427a212d8c044ddf2bfaa326b5d3dd70c9a09659d1aee617f1535f"} Feb 19 15:24:20 crc kubenswrapper[4810]: I0219 
15:24:20.710396 4810 generic.go:334] "Generic (PLEG): container finished" podID="66c7e596-ffa3-4687-8c80-21acecbd8075" containerID="8e693cf93437c9c05e8bdc88a4a36c64be3f787f3c70969abb84108fa8d23c9f" exitCode=0 Feb 19 15:24:20 crc kubenswrapper[4810]: I0219 15:24:20.710468 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerDied","Data":"8e693cf93437c9c05e8bdc88a4a36c64be3f787f3c70969abb84108fa8d23c9f"} Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.469438 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hllgd" Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.734411 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"8b7e47e70a6e8055d8f542504afd337e8aa4c60b72d221db875feb5537eeca43"} Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.734522 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"f901ccd9397f453fcd92de419c2489db750603670aca79d61f797ca4456f1425"} Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.734535 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"a5242bdba01e77c89facdb5cf616d70b6ab21d75b87ff0c920dc2e5212136f48"} Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.734546 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"5c507bc92e0aea14a09cbb57e68e961eaf45800b0d0ad346020542c85064b99d"} Feb 19 15:24:21 crc kubenswrapper[4810]: I0219 15:24:21.734558 4810 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"ac46469a5393bc2843e4d6c82f3e6bc0722ade1cee9ec77f728700e026b8f654"} Feb 19 15:24:22 crc kubenswrapper[4810]: I0219 15:24:22.746955 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7rbxk" event={"ID":"66c7e596-ffa3-4687-8c80-21acecbd8075","Type":"ContainerStarted","Data":"d90869cb3c398041b0dfc6260521c79c55956aadb206f542e3a7393d75810e32"} Feb 19 15:24:22 crc kubenswrapper[4810]: I0219 15:24:22.747151 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:22 crc kubenswrapper[4810]: I0219 15:24:22.777074 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7rbxk" podStartSLOduration=6.0981763 podStartE2EDuration="13.777043082s" podCreationTimestamp="2026-02-19 15:24:09 +0000 UTC" firstStartedPulling="2026-02-19 15:24:10.0339808 +0000 UTC m=+879.516010914" lastFinishedPulling="2026-02-19 15:24:17.712847572 +0000 UTC m=+887.194877696" observedRunningTime="2026-02-19 15:24:22.769974998 +0000 UTC m=+892.252005142" watchObservedRunningTime="2026-02-19 15:24:22.777043082 +0000 UTC m=+892.259073256" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.119586 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.176513 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.643420 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.644456 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.647719 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k54t5" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.653031 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.654124 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.665125 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.726308 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtc74\" (UniqueName: \"kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74\") pod \"openstack-operator-index-sk8xn\" (UID: \"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce\") " pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.827641 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtc74\" (UniqueName: \"kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74\") pod \"openstack-operator-index-sk8xn\" (UID: \"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce\") " pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.862216 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtc74\" (UniqueName: \"kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74\") pod \"openstack-operator-index-sk8xn\" (UID: 
\"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce\") " pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:25 crc kubenswrapper[4810]: I0219 15:24:25.964211 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:26 crc kubenswrapper[4810]: I0219 15:24:26.397673 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:27 crc kubenswrapper[4810]: I0219 15:24:27.153556 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sk8xn" event={"ID":"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce","Type":"ContainerStarted","Data":"2b0a57286b9b2353d026057f88088fed2faf2067e31ca739bd4323db681d6a21"} Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.018871 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.180164 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sk8xn" event={"ID":"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce","Type":"ContainerStarted","Data":"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede"} Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.206818 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sk8xn" podStartSLOduration=1.881144001 podStartE2EDuration="4.20678453s" podCreationTimestamp="2026-02-19 15:24:25 +0000 UTC" firstStartedPulling="2026-02-19 15:24:26.411755884 +0000 UTC m=+895.893786048" lastFinishedPulling="2026-02-19 15:24:28.737396453 +0000 UTC m=+898.219426577" observedRunningTime="2026-02-19 15:24:29.201812297 +0000 UTC m=+898.683842451" watchObservedRunningTime="2026-02-19 15:24:29.20678453 +0000 UTC m=+898.688814694" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.629700 
4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gkft8"] Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.631238 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.650899 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkft8"] Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.700257 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjvv\" (UniqueName: \"kubernetes.io/projected/09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95-kube-api-access-hkjvv\") pod \"openstack-operator-index-gkft8\" (UID: \"09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95\") " pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.801631 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkjvv\" (UniqueName: \"kubernetes.io/projected/09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95-kube-api-access-hkjvv\") pod \"openstack-operator-index-gkft8\" (UID: \"09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95\") " pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.828160 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkjvv\" (UniqueName: \"kubernetes.io/projected/09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95-kube-api-access-hkjvv\") pod \"openstack-operator-index-gkft8\" (UID: \"09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95\") " pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.889909 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-cwj24" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 
15:24:29.958206 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:29 crc kubenswrapper[4810]: I0219 15:24:29.987390 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jngcz" Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.187043 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sk8xn" podUID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" containerName="registry-server" containerID="cri-o://aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede" gracePeriod=2 Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.256932 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkft8"] Feb 19 15:24:30 crc kubenswrapper[4810]: W0219 15:24:30.279248 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f49ae7_b6fb_4ca5_9238_8bcf8d15ea95.slice/crio-5c060bf458cdb24d082d26fcac41d5603a9a93c63bfecf77323ee460a9708b0a WatchSource:0}: Error finding container 5c060bf458cdb24d082d26fcac41d5603a9a93c63bfecf77323ee460a9708b0a: Status 404 returned error can't find the container with id 5c060bf458cdb24d082d26fcac41d5603a9a93c63bfecf77323ee460a9708b0a Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.508444 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.627846 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtc74\" (UniqueName: \"kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74\") pod \"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce\" (UID: \"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce\") " Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.634614 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74" (OuterVolumeSpecName: "kube-api-access-mtc74") pod "41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" (UID: "41d0bc43-e85a-4f9f-afd4-084e6f44e4ce"). InnerVolumeSpecName "kube-api-access-mtc74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:24:30 crc kubenswrapper[4810]: I0219 15:24:30.730055 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtc74\" (UniqueName: \"kubernetes.io/projected/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce-kube-api-access-mtc74\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.195603 4810 generic.go:334] "Generic (PLEG): container finished" podID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" containerID="aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede" exitCode=0 Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.195713 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sk8xn" event={"ID":"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce","Type":"ContainerDied","Data":"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede"} Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.195725 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sk8xn" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.195756 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sk8xn" event={"ID":"41d0bc43-e85a-4f9f-afd4-084e6f44e4ce","Type":"ContainerDied","Data":"2b0a57286b9b2353d026057f88088fed2faf2067e31ca739bd4323db681d6a21"} Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.195784 4810 scope.go:117] "RemoveContainer" containerID="aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.198085 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkft8" event={"ID":"09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95","Type":"ContainerStarted","Data":"7113f462f11087b05663de419743450115d9f73ef4aed7e53e46e383fc3d8299"} Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.198154 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkft8" event={"ID":"09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95","Type":"ContainerStarted","Data":"5c060bf458cdb24d082d26fcac41d5603a9a93c63bfecf77323ee460a9708b0a"} Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.215982 4810 scope.go:117] "RemoveContainer" containerID="aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede" Feb 19 15:24:31 crc kubenswrapper[4810]: E0219 15:24:31.216603 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede\": container with ID starting with aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede not found: ID does not exist" containerID="aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.216645 4810 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede"} err="failed to get container status \"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede\": rpc error: code = NotFound desc = could not find container \"aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede\": container with ID starting with aa0ac433c527ef4ec3ac933184eb8fd41d99a848cd1d1d205b4fdfd2a61acede not found: ID does not exist" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.231103 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gkft8" podStartSLOduration=2.189969664 podStartE2EDuration="2.231080289s" podCreationTimestamp="2026-02-19 15:24:29 +0000 UTC" firstStartedPulling="2026-02-19 15:24:30.284131533 +0000 UTC m=+899.766161677" lastFinishedPulling="2026-02-19 15:24:30.325242188 +0000 UTC m=+899.807272302" observedRunningTime="2026-02-19 15:24:31.22990695 +0000 UTC m=+900.711937094" watchObservedRunningTime="2026-02-19 15:24:31.231080289 +0000 UTC m=+900.713110423" Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.254459 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.260878 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sk8xn"] Feb 19 15:24:31 crc kubenswrapper[4810]: I0219 15:24:31.463432 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" path="/var/lib/kubelet/pods/41d0bc43-e85a-4f9f-afd4-084e6f44e4ce/volumes" Feb 19 15:24:39 crc kubenswrapper[4810]: I0219 15:24:39.902981 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7rbxk" Feb 19 15:24:39 crc kubenswrapper[4810]: I0219 15:24:39.958934 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:39 crc kubenswrapper[4810]: I0219 15:24:39.958992 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:40 crc kubenswrapper[4810]: I0219 15:24:40.004506 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:40 crc kubenswrapper[4810]: I0219 15:24:40.294239 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gkft8" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.883005 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4"] Feb 19 15:24:45 crc kubenswrapper[4810]: E0219 15:24:45.883594 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" containerName="registry-server" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.883610 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" containerName="registry-server" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.883745 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d0bc43-e85a-4f9f-afd4-084e6f44e4ce" containerName="registry-server" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.884732 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.886850 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6rxxb" Feb 19 15:24:45 crc kubenswrapper[4810]: I0219 15:24:45.901142 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4"] Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.047013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwln\" (UniqueName: \"kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.047413 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.047451 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 
15:24:46.148353 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqwln\" (UniqueName: \"kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.148426 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.148456 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.148880 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.148916 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.166881 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqwln\" (UniqueName: \"kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln\") pod \"ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.213951 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:46 crc kubenswrapper[4810]: I0219 15:24:46.446764 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4"] Feb 19 15:24:46 crc kubenswrapper[4810]: W0219 15:24:46.454006 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124e176a_b011_4a5c_8e7c_ca027d881aea.slice/crio-23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189 WatchSource:0}: Error finding container 23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189: Status 404 returned error can't find the container with id 23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189 Feb 19 15:24:47 crc kubenswrapper[4810]: I0219 15:24:47.322200 4810 generic.go:334] "Generic (PLEG): container finished" podID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerID="246a4fb665235cc06f742344c48f787e5a561910911c86d2f9e860ab2e45da7a" exitCode=0 Feb 19 
15:24:47 crc kubenswrapper[4810]: I0219 15:24:47.322269 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" event={"ID":"124e176a-b011-4a5c-8e7c-ca027d881aea","Type":"ContainerDied","Data":"246a4fb665235cc06f742344c48f787e5a561910911c86d2f9e860ab2e45da7a"} Feb 19 15:24:47 crc kubenswrapper[4810]: I0219 15:24:47.322372 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" event={"ID":"124e176a-b011-4a5c-8e7c-ca027d881aea","Type":"ContainerStarted","Data":"23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189"} Feb 19 15:24:48 crc kubenswrapper[4810]: I0219 15:24:48.334371 4810 generic.go:334] "Generic (PLEG): container finished" podID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerID="5d864715a4378a444f94d842fa65a2ecb1f7da4d456dcd2ab74d325ed9f5a637" exitCode=0 Feb 19 15:24:48 crc kubenswrapper[4810]: I0219 15:24:48.334484 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" event={"ID":"124e176a-b011-4a5c-8e7c-ca027d881aea","Type":"ContainerDied","Data":"5d864715a4378a444f94d842fa65a2ecb1f7da4d456dcd2ab74d325ed9f5a637"} Feb 19 15:24:49 crc kubenswrapper[4810]: I0219 15:24:49.343622 4810 generic.go:334] "Generic (PLEG): container finished" podID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerID="80d37b029c8542d0cb36ee35550147079788b1d39ccb5e077286ac29e2925fc2" exitCode=0 Feb 19 15:24:49 crc kubenswrapper[4810]: I0219 15:24:49.343697 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" event={"ID":"124e176a-b011-4a5c-8e7c-ca027d881aea","Type":"ContainerDied","Data":"80d37b029c8542d0cb36ee35550147079788b1d39ccb5e077286ac29e2925fc2"} Feb 19 15:24:49 crc kubenswrapper[4810]: I0219 15:24:49.537496 
4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:24:49 crc kubenswrapper[4810]: I0219 15:24:49.537618 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.615945 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.720077 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util\") pod \"124e176a-b011-4a5c-8e7c-ca027d881aea\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.720300 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle\") pod \"124e176a-b011-4a5c-8e7c-ca027d881aea\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.720429 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqwln\" (UniqueName: \"kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln\") pod \"124e176a-b011-4a5c-8e7c-ca027d881aea\" (UID: \"124e176a-b011-4a5c-8e7c-ca027d881aea\") " Feb 19 15:24:50 crc kubenswrapper[4810]: 
I0219 15:24:50.722007 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle" (OuterVolumeSpecName: "bundle") pod "124e176a-b011-4a5c-8e7c-ca027d881aea" (UID: "124e176a-b011-4a5c-8e7c-ca027d881aea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.727154 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln" (OuterVolumeSpecName: "kube-api-access-dqwln") pod "124e176a-b011-4a5c-8e7c-ca027d881aea" (UID: "124e176a-b011-4a5c-8e7c-ca027d881aea"). InnerVolumeSpecName "kube-api-access-dqwln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.734398 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util" (OuterVolumeSpecName: "util") pod "124e176a-b011-4a5c-8e7c-ca027d881aea" (UID: "124e176a-b011-4a5c-8e7c-ca027d881aea"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.822016 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-util\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.822055 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/124e176a-b011-4a5c-8e7c-ca027d881aea-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:50 crc kubenswrapper[4810]: I0219 15:24:50.822068 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqwln\" (UniqueName: \"kubernetes.io/projected/124e176a-b011-4a5c-8e7c-ca027d881aea-kube-api-access-dqwln\") on node \"crc\" DevicePath \"\"" Feb 19 15:24:51 crc kubenswrapper[4810]: I0219 15:24:51.358863 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" event={"ID":"124e176a-b011-4a5c-8e7c-ca027d881aea","Type":"ContainerDied","Data":"23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189"} Feb 19 15:24:51 crc kubenswrapper[4810]: I0219 15:24:51.358917 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f2682d37978e25a3571906c49f5160415d929f9c63b597b382a08e82f5f189" Feb 19 15:24:51 crc kubenswrapper[4810]: I0219 15:24:51.359007 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.101901 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:24:54 crc kubenswrapper[4810]: E0219 15:24:54.102545 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="extract" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.102560 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="extract" Feb 19 15:24:54 crc kubenswrapper[4810]: E0219 15:24:54.102581 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="util" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.102589 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="util" Feb 19 15:24:54 crc kubenswrapper[4810]: E0219 15:24:54.102602 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="pull" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.102611 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="pull" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.102763 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e176a-b011-4a5c-8e7c-ca027d881aea" containerName="extract" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.103833 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.132063 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.169915 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.169965 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7nt\" (UniqueName: \"kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.169982 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.271293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.271365 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cg7nt\" (UniqueName: \"kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.271390 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.271861 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.271944 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.291250 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7nt\" (UniqueName: \"kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt\") pod \"community-operators-t5pmn\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.425776 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:24:54 crc kubenswrapper[4810]: I0219 15:24:54.991996 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:24:55 crc kubenswrapper[4810]: I0219 15:24:55.389505 4810 generic.go:334] "Generic (PLEG): container finished" podID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerID="0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe" exitCode=0 Feb 19 15:24:55 crc kubenswrapper[4810]: I0219 15:24:55.389586 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerDied","Data":"0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe"} Feb 19 15:24:55 crc kubenswrapper[4810]: I0219 15:24:55.389792 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerStarted","Data":"6a664415a78d036bc3a1f839d48dd33d629dec25c9501508f9c47654d086e230"} Feb 19 15:24:56 crc kubenswrapper[4810]: I0219 15:24:56.399304 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerStarted","Data":"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0"} Feb 19 15:24:57 crc kubenswrapper[4810]: I0219 15:24:57.411813 4810 generic.go:334] "Generic (PLEG): container finished" podID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerID="0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0" exitCode=0 Feb 19 15:24:57 crc kubenswrapper[4810]: I0219 15:24:57.412160 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" 
event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerDied","Data":"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0"} Feb 19 15:24:58 crc kubenswrapper[4810]: I0219 15:24:58.420667 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerStarted","Data":"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9"} Feb 19 15:24:58 crc kubenswrapper[4810]: I0219 15:24:58.444487 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5pmn" podStartSLOduration=2.033547001 podStartE2EDuration="4.444465787s" podCreationTimestamp="2026-02-19 15:24:54 +0000 UTC" firstStartedPulling="2026-02-19 15:24:55.391576956 +0000 UTC m=+924.873607100" lastFinishedPulling="2026-02-19 15:24:57.802495762 +0000 UTC m=+927.284525886" observedRunningTime="2026-02-19 15:24:58.440035209 +0000 UTC m=+927.922065323" watchObservedRunningTime="2026-02-19 15:24:58.444465787 +0000 UTC m=+927.926495911" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.651224 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn"] Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.652190 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.653920 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-gj24b" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.681047 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn"] Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.744041 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8tb\" (UniqueName: \"kubernetes.io/projected/e84ef702-2f13-42e9-ae2b-6f1465b67ff3-kube-api-access-cg8tb\") pod \"openstack-operator-controller-init-69cffcd4f6-27gzn\" (UID: \"e84ef702-2f13-42e9-ae2b-6f1465b67ff3\") " pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.845020 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8tb\" (UniqueName: \"kubernetes.io/projected/e84ef702-2f13-42e9-ae2b-6f1465b67ff3-kube-api-access-cg8tb\") pod \"openstack-operator-controller-init-69cffcd4f6-27gzn\" (UID: \"e84ef702-2f13-42e9-ae2b-6f1465b67ff3\") " pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.871485 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8tb\" (UniqueName: \"kubernetes.io/projected/e84ef702-2f13-42e9-ae2b-6f1465b67ff3-kube-api-access-cg8tb\") pod \"openstack-operator-controller-init-69cffcd4f6-27gzn\" (UID: \"e84ef702-2f13-42e9-ae2b-6f1465b67ff3\") " pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:24:59 crc kubenswrapper[4810]: I0219 15:24:59.968226 4810 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:25:00 crc kubenswrapper[4810]: I0219 15:25:00.185755 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn"] Feb 19 15:25:00 crc kubenswrapper[4810]: W0219 15:25:00.207478 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode84ef702_2f13_42e9_ae2b_6f1465b67ff3.slice/crio-8c78845001e58a7b5adde23d36fe56666961710f2b5b9ec9c1d0202ab328f250 WatchSource:0}: Error finding container 8c78845001e58a7b5adde23d36fe56666961710f2b5b9ec9c1d0202ab328f250: Status 404 returned error can't find the container with id 8c78845001e58a7b5adde23d36fe56666961710f2b5b9ec9c1d0202ab328f250 Feb 19 15:25:00 crc kubenswrapper[4810]: I0219 15:25:00.434808 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" event={"ID":"e84ef702-2f13-42e9-ae2b-6f1465b67ff3","Type":"ContainerStarted","Data":"8c78845001e58a7b5adde23d36fe56666961710f2b5b9ec9c1d0202ab328f250"} Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.426475 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.426957 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.470117 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" event={"ID":"e84ef702-2f13-42e9-ae2b-6f1465b67ff3","Type":"ContainerStarted","Data":"405b7168a897224c7eaa9f3a8924210364683d51893c54f80ba7221e170cde3b"} Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.470492 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.474301 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.512060 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" podStartSLOduration=1.7884214520000001 podStartE2EDuration="5.512042603s" podCreationTimestamp="2026-02-19 15:24:59 +0000 UTC" firstStartedPulling="2026-02-19 15:25:00.211620353 +0000 UTC m=+929.693650477" lastFinishedPulling="2026-02-19 15:25:03.935241504 +0000 UTC m=+933.417271628" observedRunningTime="2026-02-19 15:25:04.509530861 +0000 UTC m=+933.991560985" watchObservedRunningTime="2026-02-19 15:25:04.512042603 +0000 UTC m=+933.994072727" Feb 19 15:25:04 crc kubenswrapper[4810]: I0219 15:25:04.524380 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.467269 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.483096 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5pmn" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="registry-server" containerID="cri-o://0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9" gracePeriod=2 Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.857274 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.956384 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities\") pod \"f8f1642a-52bc-4509-848c-535f0c43fe54\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.956966 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg7nt\" (UniqueName: \"kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt\") pod \"f8f1642a-52bc-4509-848c-535f0c43fe54\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.957026 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content\") pod \"f8f1642a-52bc-4509-848c-535f0c43fe54\" (UID: \"f8f1642a-52bc-4509-848c-535f0c43fe54\") " Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.958056 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities" (OuterVolumeSpecName: "utilities") pod "f8f1642a-52bc-4509-848c-535f0c43fe54" (UID: "f8f1642a-52bc-4509-848c-535f0c43fe54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:25:06 crc kubenswrapper[4810]: I0219 15:25:06.963020 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt" (OuterVolumeSpecName: "kube-api-access-cg7nt") pod "f8f1642a-52bc-4509-848c-535f0c43fe54" (UID: "f8f1642a-52bc-4509-848c-535f0c43fe54"). InnerVolumeSpecName "kube-api-access-cg7nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.068063 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.068104 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg7nt\" (UniqueName: \"kubernetes.io/projected/f8f1642a-52bc-4509-848c-535f0c43fe54-kube-api-access-cg7nt\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.097965 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8f1642a-52bc-4509-848c-535f0c43fe54" (UID: "f8f1642a-52bc-4509-848c-535f0c43fe54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.169843 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f1642a-52bc-4509-848c-535f0c43fe54-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.493227 4810 generic.go:334] "Generic (PLEG): container finished" podID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerID="0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9" exitCode=0 Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.493267 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerDied","Data":"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9"} Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.493292 4810 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-t5pmn" event={"ID":"f8f1642a-52bc-4509-848c-535f0c43fe54","Type":"ContainerDied","Data":"6a664415a78d036bc3a1f839d48dd33d629dec25c9501508f9c47654d086e230"} Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.493309 4810 scope.go:117] "RemoveContainer" containerID="0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.493447 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5pmn" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.511887 4810 scope.go:117] "RemoveContainer" containerID="0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.513133 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.516830 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5pmn"] Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.532489 4810 scope.go:117] "RemoveContainer" containerID="0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.549538 4810 scope.go:117] "RemoveContainer" containerID="0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9" Feb 19 15:25:07 crc kubenswrapper[4810]: E0219 15:25:07.552911 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9\": container with ID starting with 0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9 not found: ID does not exist" containerID="0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 
15:25:07.552990 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9"} err="failed to get container status \"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9\": rpc error: code = NotFound desc = could not find container \"0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9\": container with ID starting with 0b637e868543b6fea247931f0367f092f18093bc152aef733eaab5e8c6c8fce9 not found: ID does not exist" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.553024 4810 scope.go:117] "RemoveContainer" containerID="0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0" Feb 19 15:25:07 crc kubenswrapper[4810]: E0219 15:25:07.553497 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0\": container with ID starting with 0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0 not found: ID does not exist" containerID="0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.553538 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0"} err="failed to get container status \"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0\": rpc error: code = NotFound desc = could not find container \"0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0\": container with ID starting with 0d14aab1918c80c70f97695d04823e0580412af35dfeacbd48568eb0ad351bd0 not found: ID does not exist" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.553566 4810 scope.go:117] "RemoveContainer" containerID="0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe" Feb 19 15:25:07 crc 
kubenswrapper[4810]: E0219 15:25:07.553913 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe\": container with ID starting with 0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe not found: ID does not exist" containerID="0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe" Feb 19 15:25:07 crc kubenswrapper[4810]: I0219 15:25:07.553949 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe"} err="failed to get container status \"0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe\": rpc error: code = NotFound desc = could not find container \"0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe\": container with ID starting with 0ed7b8321208c45e27b494e78f57e89db21fc20dcc81a8179837a2d07cef9bfe not found: ID does not exist" Feb 19 15:25:09 crc kubenswrapper[4810]: I0219 15:25:09.453228 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" path="/var/lib/kubelet/pods/f8f1642a-52bc-4509-848c-535f0c43fe54/volumes" Feb 19 15:25:09 crc kubenswrapper[4810]: I0219 15:25:09.972526 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-69cffcd4f6-27gzn" Feb 19 15:25:19 crc kubenswrapper[4810]: I0219 15:25:19.537909 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:25:19 crc kubenswrapper[4810]: I0219 15:25:19.538675 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.547999 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:26 crc kubenswrapper[4810]: E0219 15:25:26.548950 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="extract-content" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.548972 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="extract-content" Feb 19 15:25:26 crc kubenswrapper[4810]: E0219 15:25:26.548998 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="registry-server" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.549010 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="registry-server" Feb 19 15:25:26 crc kubenswrapper[4810]: E0219 15:25:26.549026 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="extract-utilities" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.549038 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="extract-utilities" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.549190 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f1642a-52bc-4509-848c-535f0c43fe54" containerName="registry-server" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.550317 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.568852 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.660399 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.660513 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.660660 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zvl\" (UniqueName: \"kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.762132 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zvl\" (UniqueName: \"kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.762205 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.762234 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.762693 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.762899 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.794008 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zvl\" (UniqueName: \"kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl\") pod \"certified-operators-p4pb4\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:26 crc kubenswrapper[4810]: I0219 15:25:26.888082 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:27 crc kubenswrapper[4810]: I0219 15:25:27.381881 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:27 crc kubenswrapper[4810]: W0219 15:25:27.395133 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e8a9167_c157_4d9a_830b_03a91af714f1.slice/crio-ae1a311e8a2e5f58cb4ca3edef21cfd17981ec0f6f89bf4d92c64bc3468bfa5e WatchSource:0}: Error finding container ae1a311e8a2e5f58cb4ca3edef21cfd17981ec0f6f89bf4d92c64bc3468bfa5e: Status 404 returned error can't find the container with id ae1a311e8a2e5f58cb4ca3edef21cfd17981ec0f6f89bf4d92c64bc3468bfa5e Feb 19 15:25:27 crc kubenswrapper[4810]: I0219 15:25:27.673334 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerID="48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9" exitCode=0 Feb 19 15:25:27 crc kubenswrapper[4810]: I0219 15:25:27.673545 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerDied","Data":"48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9"} Feb 19 15:25:27 crc kubenswrapper[4810]: I0219 15:25:27.673602 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerStarted","Data":"ae1a311e8a2e5f58cb4ca3edef21cfd17981ec0f6f89bf4d92c64bc3468bfa5e"} Feb 19 15:25:29 crc kubenswrapper[4810]: I0219 15:25:29.692385 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerID="d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c" exitCode=0 Feb 19 15:25:29 crc kubenswrapper[4810]: I0219 
15:25:29.692521 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerDied","Data":"d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c"} Feb 19 15:25:30 crc kubenswrapper[4810]: I0219 15:25:30.701119 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerStarted","Data":"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223"} Feb 19 15:25:30 crc kubenswrapper[4810]: I0219 15:25:30.718200 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4pb4" podStartSLOduration=2.091366798 podStartE2EDuration="4.718185405s" podCreationTimestamp="2026-02-19 15:25:26 +0000 UTC" firstStartedPulling="2026-02-19 15:25:27.674609713 +0000 UTC m=+957.156639827" lastFinishedPulling="2026-02-19 15:25:30.3014283 +0000 UTC m=+959.783458434" observedRunningTime="2026-02-19 15:25:30.714057914 +0000 UTC m=+960.196088038" watchObservedRunningTime="2026-02-19 15:25:30.718185405 +0000 UTC m=+960.200215529" Feb 19 15:25:36 crc kubenswrapper[4810]: I0219 15:25:36.889171 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:36 crc kubenswrapper[4810]: I0219 15:25:36.889781 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:36 crc kubenswrapper[4810]: I0219 15:25:36.944788 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:37 crc kubenswrapper[4810]: I0219 15:25:37.072278 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 
15:25:37 crc kubenswrapper[4810]: I0219 15:25:37.175688 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.038234 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4pb4" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="registry-server" containerID="cri-o://4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223" gracePeriod=2 Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.586071 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"] Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.588188 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.600042 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"] Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.604682 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjhx\" (UniqueName: \"kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.604751 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.604842 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.705821 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpjhx\" (UniqueName: \"kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.706176 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.706236 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.706881 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.706955 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.726829 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpjhx\" (UniqueName: \"kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx\") pod \"redhat-marketplace-gx7qf\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") " pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.921090 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:39 crc kubenswrapper[4810]: I0219 15:25:39.999669 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.052091 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerID="4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223" exitCode=0 Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.052434 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerDied","Data":"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223"} Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.052461 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4pb4" event={"ID":"1e8a9167-c157-4d9a-830b-03a91af714f1","Type":"ContainerDied","Data":"ae1a311e8a2e5f58cb4ca3edef21cfd17981ec0f6f89bf4d92c64bc3468bfa5e"} Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.052478 4810 scope.go:117] "RemoveContainer" containerID="4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.052588 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4pb4" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.108649 4810 scope.go:117] "RemoveContainer" containerID="d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.111054 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content\") pod \"1e8a9167-c157-4d9a-830b-03a91af714f1\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.111128 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7zvl\" (UniqueName: \"kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl\") pod \"1e8a9167-c157-4d9a-830b-03a91af714f1\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.111213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities\") pod \"1e8a9167-c157-4d9a-830b-03a91af714f1\" (UID: \"1e8a9167-c157-4d9a-830b-03a91af714f1\") " Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.112114 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities" (OuterVolumeSpecName: "utilities") pod "1e8a9167-c157-4d9a-830b-03a91af714f1" (UID: "1e8a9167-c157-4d9a-830b-03a91af714f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.122764 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl" (OuterVolumeSpecName: "kube-api-access-s7zvl") pod "1e8a9167-c157-4d9a-830b-03a91af714f1" (UID: "1e8a9167-c157-4d9a-830b-03a91af714f1"). InnerVolumeSpecName "kube-api-access-s7zvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.151292 4810 scope.go:117] "RemoveContainer" containerID="48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.213562 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.213593 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7zvl\" (UniqueName: \"kubernetes.io/projected/1e8a9167-c157-4d9a-830b-03a91af714f1-kube-api-access-s7zvl\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.229499 4810 scope.go:117] "RemoveContainer" containerID="4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.231489 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e8a9167-c157-4d9a-830b-03a91af714f1" (UID: "1e8a9167-c157-4d9a-830b-03a91af714f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:25:40 crc kubenswrapper[4810]: E0219 15:25:40.233698 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223\": container with ID starting with 4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223 not found: ID does not exist" containerID="4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.233741 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223"} err="failed to get container status \"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223\": rpc error: code = NotFound desc = could not find container \"4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223\": container with ID starting with 4f9ebb887d3b59f725d8e7b6ac586d52978a5345af6ed853a8bce7fd37049223 not found: ID does not exist" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.233767 4810 scope.go:117] "RemoveContainer" containerID="d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c" Feb 19 15:25:40 crc kubenswrapper[4810]: E0219 15:25:40.234860 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c\": container with ID starting with d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c not found: ID does not exist" containerID="d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.234887 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c"} 
err="failed to get container status \"d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c\": rpc error: code = NotFound desc = could not find container \"d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c\": container with ID starting with d9c156f23ea8a27c3e29c7dc448b51e5084a296bac6eb2789880f5b577811e2c not found: ID does not exist" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.234906 4810 scope.go:117] "RemoveContainer" containerID="48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9" Feb 19 15:25:40 crc kubenswrapper[4810]: E0219 15:25:40.237806 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9\": container with ID starting with 48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9 not found: ID does not exist" containerID="48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.237838 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9"} err="failed to get container status \"48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9\": rpc error: code = NotFound desc = could not find container \"48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9\": container with ID starting with 48a49f38fa3ffbe1677b0b403f3aaf11cb5caa40978378460bf04941a8180ec9 not found: ID does not exist" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.315112 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8a9167-c157-4d9a-830b-03a91af714f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.378215 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.389018 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4pb4"] Feb 19 15:25:40 crc kubenswrapper[4810]: I0219 15:25:40.459920 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"] Feb 19 15:25:41 crc kubenswrapper[4810]: I0219 15:25:41.061772 4810 generic.go:334] "Generic (PLEG): container finished" podID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerID="1d5f88ecf4c81e410f0df90ab60b4889433c0df9a3a9c3d46a0ad0dad5a5c6f9" exitCode=0 Feb 19 15:25:41 crc kubenswrapper[4810]: I0219 15:25:41.061893 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerDied","Data":"1d5f88ecf4c81e410f0df90ab60b4889433c0df9a3a9c3d46a0ad0dad5a5c6f9"} Feb 19 15:25:41 crc kubenswrapper[4810]: I0219 15:25:41.062061 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerStarted","Data":"c9c6c8774d7ca4a97936ace4183e794c37a4dbb46f2dcfbc1cbf73f1bab26b9b"} Feb 19 15:25:41 crc kubenswrapper[4810]: I0219 15:25:41.446438 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" path="/var/lib/kubelet/pods/1e8a9167-c157-4d9a-830b-03a91af714f1/volumes" Feb 19 15:25:42 crc kubenswrapper[4810]: I0219 15:25:42.068292 4810 generic.go:334] "Generic (PLEG): container finished" podID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerID="a21841d08c3392c62d7d0123a3228f5e73d4d173558fa34eefb59a0035628f39" exitCode=0 Feb 19 15:25:42 crc kubenswrapper[4810]: I0219 15:25:42.068449 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" 
event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerDied","Data":"a21841d08c3392c62d7d0123a3228f5e73d4d173558fa34eefb59a0035628f39"} Feb 19 15:25:43 crc kubenswrapper[4810]: I0219 15:25:43.075949 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerStarted","Data":"5cab9d89ebbb4715343c797d52130d009456c8d3d9eaf887c80836933b581c07"} Feb 19 15:25:43 crc kubenswrapper[4810]: I0219 15:25:43.099467 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gx7qf" podStartSLOduration=2.728977246 podStartE2EDuration="4.099452367s" podCreationTimestamp="2026-02-19 15:25:39 +0000 UTC" firstStartedPulling="2026-02-19 15:25:41.063034852 +0000 UTC m=+970.545064976" lastFinishedPulling="2026-02-19 15:25:42.433509973 +0000 UTC m=+971.915540097" observedRunningTime="2026-02-19 15:25:43.096599547 +0000 UTC m=+972.578629671" watchObservedRunningTime="2026-02-19 15:25:43.099452367 +0000 UTC m=+972.581482491" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.039625 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt"] Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.040433 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="registry-server" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.040445 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="registry-server" Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.040461 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="extract-content" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.040502 4810 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="extract-content" Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.040514 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="extract-utilities" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.040521 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="extract-utilities" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.040663 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8a9167-c157-4d9a-830b-03a91af714f1" containerName="registry-server" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.041160 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.043960 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-l2x68" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.057262 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.094543 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.095719 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.099839 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.101841 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xzqhm" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.110272 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qz68t"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.111002 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.111480 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.118087 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.118691 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6p6vd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.127744 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j2fdc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.147946 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.181400 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qz68t"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.189850 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.190682 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.198480 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xkwtq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.206925 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.208746 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.216023 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7g47h" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.216318 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxjv\" (UniqueName: \"kubernetes.io/projected/1217b757-0f1c-4c4e-9abe-55875992915d-kube-api-access-vzxjv\") pod \"cinder-operator-controller-manager-5d946d989d-jxmt5\" (UID: \"1217b757-0f1c-4c4e-9abe-55875992915d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.216396 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5rr\" (UniqueName: \"kubernetes.io/projected/2106e7b5-bb83-464a-a43f-943f22b55078-kube-api-access-dk5rr\") pod \"glance-operator-controller-manager-77987464f4-qz68t\" (UID: \"2106e7b5-bb83-464a-a43f-943f22b55078\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.216424 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpks5\" (UniqueName: \"kubernetes.io/projected/52bb990c-eff0-4673-be27-d55d433bef0d-kube-api-access-kpks5\") pod \"designate-operator-controller-manager-6d8bf5c495-z5fb9\" (UID: \"52bb990c-eff0-4673-be27-d55d433bef0d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.216440 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5dr\" (UniqueName: \"kubernetes.io/projected/91002269-9fe0-44d2-9dbd-9e4cf58274bf-kube-api-access-rq5dr\") pod \"barbican-operator-controller-manager-868647ff47-mzslt\" (UID: \"91002269-9fe0-44d2-9dbd-9e4cf58274bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.223939 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.238525 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.266825 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.271881 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.274202 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.274865 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.281393 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-br7n2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.295416 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.298596 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n2jcn" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.319629 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.319942 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljv7\" (UniqueName: \"kubernetes.io/projected/602535d1-0abe-471e-8409-31319af7bd4b-kube-api-access-fljv7\") pod \"ironic-operator-controller-manager-554564d7fc-nnps5\" (UID: \"602535d1-0abe-471e-8409-31319af7bd4b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320003 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5rr\" (UniqueName: \"kubernetes.io/projected/2106e7b5-bb83-464a-a43f-943f22b55078-kube-api-access-dk5rr\") pod \"glance-operator-controller-manager-77987464f4-qz68t\" (UID: \"2106e7b5-bb83-464a-a43f-943f22b55078\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320029 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kpks5\" (UniqueName: \"kubernetes.io/projected/52bb990c-eff0-4673-be27-d55d433bef0d-kube-api-access-kpks5\") pod \"designate-operator-controller-manager-6d8bf5c495-z5fb9\" (UID: \"52bb990c-eff0-4673-be27-d55d433bef0d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320046 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq5dr\" (UniqueName: \"kubernetes.io/projected/91002269-9fe0-44d2-9dbd-9e4cf58274bf-kube-api-access-rq5dr\") pod \"barbican-operator-controller-manager-868647ff47-mzslt\" (UID: \"91002269-9fe0-44d2-9dbd-9e4cf58274bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320087 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320109 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdkd6\" (UniqueName: \"kubernetes.io/projected/f0ab3643-d267-4902-af1f-cbcbdd7e5e41-kube-api-access-mdkd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-gnmlp\" (UID: \"f0ab3643-d267-4902-af1f-cbcbdd7e5e41\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqxl\" (UniqueName: \"kubernetes.io/projected/e2942952-ce19-4053-91da-05623c954167-kube-api-access-pjqxl\") pod 
\"heat-operator-controller-manager-69f49c598c-ffm66\" (UID: \"e2942952-ce19-4053-91da-05623c954167\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320154 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzxjv\" (UniqueName: \"kubernetes.io/projected/1217b757-0f1c-4c4e-9abe-55875992915d-kube-api-access-vzxjv\") pod \"cinder-operator-controller-manager-5d946d989d-jxmt5\" (UID: \"1217b757-0f1c-4c4e-9abe-55875992915d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.320171 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8smk\" (UniqueName: \"kubernetes.io/projected/4898d4eb-d474-44bc-9a38-e36f300d132f-kube-api-access-s8smk\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.326362 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.333731 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.334731 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.337561 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.361307 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.362209 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.368367 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rg8mp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.381038 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kz77r" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.389485 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzxjv\" (UniqueName: \"kubernetes.io/projected/1217b757-0f1c-4c4e-9abe-55875992915d-kube-api-access-vzxjv\") pod \"cinder-operator-controller-manager-5d946d989d-jxmt5\" (UID: \"1217b757-0f1c-4c4e-9abe-55875992915d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.392415 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.392778 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq5dr\" (UniqueName: 
\"kubernetes.io/projected/91002269-9fe0-44d2-9dbd-9e4cf58274bf-kube-api-access-rq5dr\") pod \"barbican-operator-controller-manager-868647ff47-mzslt\" (UID: \"91002269-9fe0-44d2-9dbd-9e4cf58274bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.396824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5rr\" (UniqueName: \"kubernetes.io/projected/2106e7b5-bb83-464a-a43f-943f22b55078-kube-api-access-dk5rr\") pod \"glance-operator-controller-manager-77987464f4-qz68t\" (UID: \"2106e7b5-bb83-464a-a43f-943f22b55078\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.405004 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpks5\" (UniqueName: \"kubernetes.io/projected/52bb990c-eff0-4673-be27-d55d433bef0d-kube-api-access-kpks5\") pod \"designate-operator-controller-manager-6d8bf5c495-z5fb9\" (UID: \"52bb990c-eff0-4673-be27-d55d433bef0d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.421490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.421553 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdkd6\" (UniqueName: \"kubernetes.io/projected/f0ab3643-d267-4902-af1f-cbcbdd7e5e41-kube-api-access-mdkd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-gnmlp\" (UID: \"f0ab3643-d267-4902-af1f-cbcbdd7e5e41\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.421596 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqxl\" (UniqueName: \"kubernetes.io/projected/e2942952-ce19-4053-91da-05623c954167-kube-api-access-pjqxl\") pod \"heat-operator-controller-manager-69f49c598c-ffm66\" (UID: \"e2942952-ce19-4053-91da-05623c954167\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.421627 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8smk\" (UniqueName: \"kubernetes.io/projected/4898d4eb-d474-44bc-9a38-e36f300d132f-kube-api-access-s8smk\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.421653 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljv7\" (UniqueName: \"kubernetes.io/projected/602535d1-0abe-471e-8409-31319af7bd4b-kube-api-access-fljv7\") pod \"ironic-operator-controller-manager-554564d7fc-nnps5\" (UID: \"602535d1-0abe-471e-8409-31319af7bd4b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.422088 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.422141 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. 
No retries permitted until 2026-02-19 15:25:48.922124904 +0000 UTC m=+978.404155028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.428599 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.450560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.451378 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.460376 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.461273 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.467914 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.474161 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hbjzp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.474402 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tfd89" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.492913 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdkd6\" (UniqueName: \"kubernetes.io/projected/f0ab3643-d267-4902-af1f-cbcbdd7e5e41-kube-api-access-mdkd6\") pod \"horizon-operator-controller-manager-5b9b8895d5-gnmlp\" (UID: \"f0ab3643-d267-4902-af1f-cbcbdd7e5e41\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.493423 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8smk\" (UniqueName: \"kubernetes.io/projected/4898d4eb-d474-44bc-9a38-e36f300d132f-kube-api-access-s8smk\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.510080 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqxl\" (UniqueName: \"kubernetes.io/projected/e2942952-ce19-4053-91da-05623c954167-kube-api-access-pjqxl\") pod \"heat-operator-controller-manager-69f49c598c-ffm66\" (UID: \"e2942952-ce19-4053-91da-05623c954167\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.510511 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.514958 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljv7\" (UniqueName: \"kubernetes.io/projected/602535d1-0abe-471e-8409-31319af7bd4b-kube-api-access-fljv7\") pod \"ironic-operator-controller-manager-554564d7fc-nnps5\" (UID: \"602535d1-0abe-471e-8409-31319af7bd4b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.515020 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.525640 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.526663 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslf9\" (UniqueName: \"kubernetes.io/projected/2126b31b-0444-43e4-a250-837f37d476aa-kube-api-access-qslf9\") pod \"keystone-operator-controller-manager-b4d948c87-mkfsc\" (UID: \"2126b31b-0444-43e4-a250-837f37d476aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.526704 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnncf\" (UniqueName: \"kubernetes.io/projected/942f40af-0498-4865-99da-bdcd068ef449-kube-api-access-jnncf\") pod \"manila-operator-controller-manager-54f6768c69-vc7cw\" (UID: \"942f40af-0498-4865-99da-bdcd068ef449\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.542620 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.598816 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.628215 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.647982 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.649625 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.661856 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p2f6x" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.662855 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2j7z\" (UniqueName: \"kubernetes.io/projected/e4a54646-39cf-4e42-9367-487ea4f7d8a4-kube-api-access-j2j7z\") pod \"mariadb-operator-controller-manager-6994f66f48-6fqmd\" (UID: \"e4a54646-39cf-4e42-9367-487ea4f7d8a4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.662907 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj652\" (UniqueName: \"kubernetes.io/projected/fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e-kube-api-access-jj652\") pod 
\"nova-operator-controller-manager-567668f5cf-l67cq\" (UID: \"fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.663026 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslf9\" (UniqueName: \"kubernetes.io/projected/2126b31b-0444-43e4-a250-837f37d476aa-kube-api-access-qslf9\") pod \"keystone-operator-controller-manager-b4d948c87-mkfsc\" (UID: \"2126b31b-0444-43e4-a250-837f37d476aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.664368 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnncf\" (UniqueName: \"kubernetes.io/projected/942f40af-0498-4865-99da-bdcd068ef449-kube-api-access-jnncf\") pod \"manila-operator-controller-manager-54f6768c69-vc7cw\" (UID: \"942f40af-0498-4865-99da-bdcd068ef449\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.689919 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.720770 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.724833 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6ft99" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.726778 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslf9\" (UniqueName: \"kubernetes.io/projected/2126b31b-0444-43e4-a250-837f37d476aa-kube-api-access-qslf9\") pod \"keystone-operator-controller-manager-b4d948c87-mkfsc\" (UID: \"2126b31b-0444-43e4-a250-837f37d476aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.727007 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.729806 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnncf\" (UniqueName: \"kubernetes.io/projected/942f40af-0498-4865-99da-bdcd068ef449-kube-api-access-jnncf\") pod \"manila-operator-controller-manager-54f6768c69-vc7cw\" (UID: \"942f40af-0498-4865-99da-bdcd068ef449\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.751258 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.765600 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj652\" (UniqueName: \"kubernetes.io/projected/fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e-kube-api-access-jj652\") pod \"nova-operator-controller-manager-567668f5cf-l67cq\" (UID: \"fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.765664 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lgb\" (UniqueName: \"kubernetes.io/projected/3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d-kube-api-access-w9lgb\") pod \"octavia-operator-controller-manager-69f8888797-vcbwg\" (UID: \"3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.765690 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65-kube-api-access-drb5r\") pod \"neutron-operator-controller-manager-64ddbf8bb-jjqv2\" (UID: \"4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.765769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2j7z\" (UniqueName: \"kubernetes.io/projected/e4a54646-39cf-4e42-9367-487ea4f7d8a4-kube-api-access-j2j7z\") pod \"mariadb-operator-controller-manager-6994f66f48-6fqmd\" (UID: \"e4a54646-39cf-4e42-9367-487ea4f7d8a4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:25:48 crc 
kubenswrapper[4810]: I0219 15:25:48.772181 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.780897 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.781753 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.791490 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj652\" (UniqueName: \"kubernetes.io/projected/fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e-kube-api-access-jj652\") pod \"nova-operator-controller-manager-567668f5cf-l67cq\" (UID: \"fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.791607 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.791923 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6b8w8" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.792741 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.796889 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.797062 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2whfc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.807197 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.819202 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2j7z\" (UniqueName: \"kubernetes.io/projected/e4a54646-39cf-4e42-9367-487ea4f7d8a4-kube-api-access-j2j7z\") pod \"mariadb-operator-controller-manager-6994f66f48-6fqmd\" (UID: \"e4a54646-39cf-4e42-9367-487ea4f7d8a4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.819640 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.830037 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.830918 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.830994 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.840870 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xkq2q" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.841031 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.863807 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-t44nb"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.864608 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867188 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867231 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsf2\" (UniqueName: \"kubernetes.io/projected/e163eac0-ea1f-4002-9469-844240d7a44c-kube-api-access-dmsf2\") pod \"ovn-operator-controller-manager-d44cf6b75-5xnwd\" (UID: \"e163eac0-ea1f-4002-9469-844240d7a44c\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867270 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwc4k\" (UniqueName: \"kubernetes.io/projected/9f5779a5-4cda-40dc-831d-950f97eae317-kube-api-access-bwc4k\") pod \"placement-operator-controller-manager-8497b45c89-7tzvr\" (UID: \"9f5779a5-4cda-40dc-831d-950f97eae317\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867302 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lgb\" (UniqueName: \"kubernetes.io/projected/3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d-kube-api-access-w9lgb\") pod \"octavia-operator-controller-manager-69f8888797-vcbwg\" (UID: \"3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65-kube-api-access-drb5r\") pod \"neutron-operator-controller-manager-64ddbf8bb-jjqv2\" (UID: \"4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.867381 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vf8f\" (UniqueName: \"kubernetes.io/projected/c677bdd0-7248-4b02-9ab4-035c034a976a-kube-api-access-9vf8f\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.872881 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6m778" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.889308 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-t44nb"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.890916 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lgb\" (UniqueName: \"kubernetes.io/projected/3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d-kube-api-access-w9lgb\") pod \"octavia-operator-controller-manager-69f8888797-vcbwg\" (UID: \"3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.897270 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.899217 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.900377 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.903556 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-shnl6" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.915680 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.923154 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drb5r\" (UniqueName: \"kubernetes.io/projected/4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65-kube-api-access-drb5r\") pod \"neutron-operator-controller-manager-64ddbf8bb-jjqv2\" (UID: \"4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.923215 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.942531 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pw9kt"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.943955 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.949352 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jlkc4" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.952551 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.953448 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.955387 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.959158 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mvh4x" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.962957 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pw9kt"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969738 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969794 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsf2\" (UniqueName: \"kubernetes.io/projected/e163eac0-ea1f-4002-9469-844240d7a44c-kube-api-access-dmsf2\") pod \"ovn-operator-controller-manager-d44cf6b75-5xnwd\" (UID: \"e163eac0-ea1f-4002-9469-844240d7a44c\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbnk\" (UniqueName: \"kubernetes.io/projected/9c5af548-c722-4e6b-9309-1420838257e0-kube-api-access-9vbnk\") pod \"watcher-operator-controller-manager-798847869b-dlmvg\" (UID: \"9c5af548-c722-4e6b-9309-1420838257e0\") " pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969860 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvv8\" (UniqueName: \"kubernetes.io/projected/69b7e96d-bce6-4653-998e-3bf5d159ae5a-kube-api-access-4jvv8\") pod \"test-operator-controller-manager-7866795846-pw9kt\" (UID: \"69b7e96d-bce6-4653-998e-3bf5d159ae5a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969878 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwc4k\" (UniqueName: \"kubernetes.io/projected/9f5779a5-4cda-40dc-831d-950f97eae317-kube-api-access-bwc4k\") pod \"placement-operator-controller-manager-8497b45c89-7tzvr\" (UID: \"9f5779a5-4cda-40dc-831d-950f97eae317\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969913 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8pz6\" (UniqueName: \"kubernetes.io/projected/eaed166e-39b5-45ca-8a65-a22710d5fe37-kube-api-access-m8pz6\") pod \"telemetry-operator-controller-manager-7f45b4ff68-px9zx\" (UID: \"eaed166e-39b5-45ca-8a65-a22710d5fe37\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.969946 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vf8f\" (UniqueName: \"kubernetes.io/projected/c677bdd0-7248-4b02-9ab4-035c034a976a-kube-api-access-9vf8f\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.971101 4810 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.971178 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert podName:c677bdd0-7248-4b02-9ab4-035c034a976a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:49.471158101 +0000 UTC m=+978.953188225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" (UID: "c677bdd0-7248-4b02-9ab4-035c034a976a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.971340 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.971383 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnwv\" (UniqueName: \"kubernetes.io/projected/aa5063d7-2358-4149-a3b9-ef2ce138faf4-kube-api-access-bpnwv\") pod \"swift-operator-controller-manager-68f46476f-t44nb\" (UID: \"aa5063d7-2358-4149-a3b9-ef2ce138faf4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.971690 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: E0219 15:25:48.971716 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. No retries permitted until 2026-02-19 15:25:49.971707394 +0000 UTC m=+979.453737518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.978604 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg"] Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.993813 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:25:48 crc kubenswrapper[4810]: I0219 15:25:48.999403 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.000290 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.005135 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.005556 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rwc2r" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.005675 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.012025 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwc4k\" (UniqueName: \"kubernetes.io/projected/9f5779a5-4cda-40dc-831d-950f97eae317-kube-api-access-bwc4k\") pod \"placement-operator-controller-manager-8497b45c89-7tzvr\" (UID: \"9f5779a5-4cda-40dc-831d-950f97eae317\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.014561 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.022568 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsf2\" (UniqueName: \"kubernetes.io/projected/e163eac0-ea1f-4002-9469-844240d7a44c-kube-api-access-dmsf2\") pod \"ovn-operator-controller-manager-d44cf6b75-5xnwd\" (UID: \"e163eac0-ea1f-4002-9469-844240d7a44c\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.025531 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vf8f\" (UniqueName: 
\"kubernetes.io/projected/c677bdd0-7248-4b02-9ab4-035c034a976a-kube-api-access-9vf8f\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.039377 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.040405 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.042735 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-62cjp" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.044214 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.058978 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.075789 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbnk\" (UniqueName: \"kubernetes.io/projected/9c5af548-c722-4e6b-9309-1420838257e0-kube-api-access-9vbnk\") pod \"watcher-operator-controller-manager-798847869b-dlmvg\" (UID: \"9c5af548-c722-4e6b-9309-1420838257e0\") " pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.075834 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.075875 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvv8\" (UniqueName: \"kubernetes.io/projected/69b7e96d-bce6-4653-998e-3bf5d159ae5a-kube-api-access-4jvv8\") pod \"test-operator-controller-manager-7866795846-pw9kt\" (UID: \"69b7e96d-bce6-4653-998e-3bf5d159ae5a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.075901 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 
15:25:49.075992 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8pz6\" (UniqueName: \"kubernetes.io/projected/eaed166e-39b5-45ca-8a65-a22710d5fe37-kube-api-access-m8pz6\") pod \"telemetry-operator-controller-manager-7f45b4ff68-px9zx\" (UID: \"eaed166e-39b5-45ca-8a65-a22710d5fe37\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.076066 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5xfs\" (UniqueName: \"kubernetes.io/projected/64ed590e-59b6-44c8-baee-324162d099b8-kube-api-access-b5xfs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k98c8\" (UID: \"64ed590e-59b6-44c8-baee-324162d099b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.076137 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnwv\" (UniqueName: \"kubernetes.io/projected/aa5063d7-2358-4149-a3b9-ef2ce138faf4-kube-api-access-bpnwv\") pod \"swift-operator-controller-manager-68f46476f-t44nb\" (UID: \"aa5063d7-2358-4149-a3b9-ef2ce138faf4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.076160 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbcd5\" (UniqueName: \"kubernetes.io/projected/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-kube-api-access-rbcd5\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.109733 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnwv\" 
(UniqueName: \"kubernetes.io/projected/aa5063d7-2358-4149-a3b9-ef2ce138faf4-kube-api-access-bpnwv\") pod \"swift-operator-controller-manager-68f46476f-t44nb\" (UID: \"aa5063d7-2358-4149-a3b9-ef2ce138faf4\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.112424 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbnk\" (UniqueName: \"kubernetes.io/projected/9c5af548-c722-4e6b-9309-1420838257e0-kube-api-access-9vbnk\") pod \"watcher-operator-controller-manager-798847869b-dlmvg\" (UID: \"9c5af548-c722-4e6b-9309-1420838257e0\") " pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.114133 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvv8\" (UniqueName: \"kubernetes.io/projected/69b7e96d-bce6-4653-998e-3bf5d159ae5a-kube-api-access-4jvv8\") pod \"test-operator-controller-manager-7866795846-pw9kt\" (UID: \"69b7e96d-bce6-4653-998e-3bf5d159ae5a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.114482 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8pz6\" (UniqueName: \"kubernetes.io/projected/eaed166e-39b5-45ca-8a65-a22710d5fe37-kube-api-access-m8pz6\") pod \"telemetry-operator-controller-manager-7f45b4ff68-px9zx\" (UID: \"eaed166e-39b5-45ca-8a65-a22710d5fe37\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.131412 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.171792 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.181258 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbcd5\" (UniqueName: \"kubernetes.io/projected/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-kube-api-access-rbcd5\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.181378 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.181419 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.181473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5xfs\" (UniqueName: \"kubernetes.io/projected/64ed590e-59b6-44c8-baee-324162d099b8-kube-api-access-b5xfs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k98c8\" (UID: 
\"64ed590e-59b6-44c8-baee-324162d099b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.181976 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.182637 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.182681 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:49.682663475 +0000 UTC m=+979.164693599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.182855 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.182882 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:49.6828738 +0000 UTC m=+979.164903924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.207186 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbcd5\" (UniqueName: \"kubernetes.io/projected/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-kube-api-access-rbcd5\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.207515 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5xfs\" (UniqueName: \"kubernetes.io/projected/64ed590e-59b6-44c8-baee-324162d099b8-kube-api-access-b5xfs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-k98c8\" (UID: \"64ed590e-59b6-44c8-baee-324162d099b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.210078 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.243700 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.325211 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.346452 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.367242 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.472639 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.503291 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.503562 4810 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.503633 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert podName:c677bdd0-7248-4b02-9ab4-035c034a976a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:50.503614792 +0000 UTC m=+979.985644916 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" (UID: "c677bdd0-7248-4b02-9ab4-035c034a976a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.537488 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.537551 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.537601 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.538218 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.538287 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" 
containerID="cri-o://6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff" gracePeriod=600 Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.709073 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.709158 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.709275 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.709342 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.709356 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:50.709336325 +0000 UTC m=+980.191366439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: E0219 15:25:49.709400 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:50.709382916 +0000 UTC m=+980.191413160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.921417 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.921902 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.921917 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.940042 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.969794 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qz68t"] Feb 19 15:25:49 crc kubenswrapper[4810]: I0219 15:25:49.974871 
4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66"] Feb 19 15:25:49 crc kubenswrapper[4810]: W0219 15:25:49.983602 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1217b757_0f1c_4c4e_9abe_55875992915d.slice/crio-e927dcfa354e3551ae1e703ad4c42cb53f27b54cd823cc4b65323cbfec627b59 WatchSource:0}: Error finding container e927dcfa354e3551ae1e703ad4c42cb53f27b54cd823cc4b65323cbfec627b59: Status 404 returned error can't find the container with id e927dcfa354e3551ae1e703ad4c42cb53f27b54cd823cc4b65323cbfec627b59 Feb 19 15:25:50 crc kubenswrapper[4810]: W0219 15:25:50.002408 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2106e7b5_bb83_464a_a43f_943f22b55078.slice/crio-8142761ba1796805ba5943624af056a3f22412cc0742809da1890e37bd53b992 WatchSource:0}: Error finding container 8142761ba1796805ba5943624af056a3f22412cc0742809da1890e37bd53b992: Status 404 returned error can't find the container with id 8142761ba1796805ba5943624af056a3f22412cc0742809da1890e37bd53b992 Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.004729 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.019019 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.019852 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.019895 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. No retries permitted until 2026-02-19 15:25:52.019881257 +0000 UTC m=+981.501911381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.161692 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff" exitCode=0 Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.161769 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.161797 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.161813 4810 scope.go:117] "RemoveContainer" containerID="946b41284ed03248aebd830c7fb80426be59078e4ea2a93cd09930514fedec98" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.165252 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" event={"ID":"e2942952-ce19-4053-91da-05623c954167","Type":"ContainerStarted","Data":"4b5dde9b2f2c3ebd501b53c61f111850cace1531e9983fd3cf0b82595113eb42"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.167538 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" event={"ID":"f0ab3643-d267-4902-af1f-cbcbdd7e5e41","Type":"ContainerStarted","Data":"32d8aeaba9ee2ad0c0ef0e695b47b2c106e399b015e2a40387b717f698edf72c"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.169050 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" event={"ID":"2106e7b5-bb83-464a-a43f-943f22b55078","Type":"ContainerStarted","Data":"8142761ba1796805ba5943624af056a3f22412cc0742809da1890e37bd53b992"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.170171 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" event={"ID":"1217b757-0f1c-4c4e-9abe-55875992915d","Type":"ContainerStarted","Data":"e927dcfa354e3551ae1e703ad4c42cb53f27b54cd823cc4b65323cbfec627b59"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.172924 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" event={"ID":"52bb990c-eff0-4673-be27-d55d433bef0d","Type":"ContainerStarted","Data":"0440702e1248976a58eed4017e3d4c545485a092c7440f0bde7e0efe97d4256a"} Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.173463 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.181247 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.189724 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.257667 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gx7qf" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.328906 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.362508 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.498478 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.509306 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.517915 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq"] Feb 19 15:25:50 crc kubenswrapper[4810]: W0219 15:25:50.518085 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaed166e_39b5_45ca_8a65_a22710d5fe37.slice/crio-8c2e63627629d8e721a1e357e303b313dc5f495a90ce248f16c8662b394c1381 WatchSource:0}: Error finding container 8c2e63627629d8e721a1e357e303b313dc5f495a90ce248f16c8662b394c1381: Status 404 returned error can't find the container with id 8c2e63627629d8e721a1e357e303b313dc5f495a90ce248f16c8662b394c1381 Feb 19 15:25:50 crc 
kubenswrapper[4810]: I0219 15:25:50.529519 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.529648 4810 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.529693 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert podName:c677bdd0-7248-4b02-9ab4-035c034a976a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:52.529678483 +0000 UTC m=+982.011708607 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" (UID: "c677bdd0-7248-4b02-9ab4-035c034a976a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.535003 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.557239 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.581376 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.586716 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-t44nb"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.595627 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pw9kt"] Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.599626 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qslf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-mkfsc_openstack-operators(2126b31b-0444-43e4-a250-837f37d476aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.599863 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2j7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-6fqmd_openstack-operators(e4a54646-39cf-4e42-9367-487ea4f7d8a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.600832 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" podUID="2126b31b-0444-43e4-a250-837f37d476aa" Feb 19 15:25:50 crc 
kubenswrapper[4810]: E0219 15:25:50.600923 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" podUID="e4a54646-39cf-4e42-9367-487ea4f7d8a4" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.612857 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jvv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-pw9kt_openstack-operators(69b7e96d-bce6-4653-998e-3bf5d159ae5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.614477 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" podUID="69b7e96d-bce6-4653-998e-3bf5d159ae5a" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.632403 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8"] Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.638429 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg"] Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.644870 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5xfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-k98c8_openstack-operators(64ed590e-59b6-44c8-baee-324162d099b8): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.644904 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr"] Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.644990 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwc4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-7tzvr_openstack-operators(9f5779a5-4cda-40dc-831d-950f97eae317): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.651527 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" podUID="9f5779a5-4cda-40dc-831d-950f97eae317" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.651940 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" podUID="64ed590e-59b6-44c8-baee-324162d099b8" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.654089 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.159:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9vbnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-798847869b-dlmvg_openstack-operators(9c5af548-c722-4e6b-9309-1420838257e0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.664682 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" podUID="9c5af548-c722-4e6b-9309-1420838257e0" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.738304 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:50 crc kubenswrapper[4810]: I0219 15:25:50.738630 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.738484 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.738931 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:52.738897391 +0000 UTC m=+982.220927515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.738738 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 15:25:50 crc kubenswrapper[4810]: E0219 15:25:50.739034 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:52.739003224 +0000 UTC m=+982.221033348 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.182890 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" event={"ID":"942f40af-0498-4865-99da-bdcd068ef449","Type":"ContainerStarted","Data":"3f797ad07e482ce11b7c40c6e488a42e9ea2aefe5141155ec3794b7d4d2a5a8c"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.186371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" event={"ID":"9c5af548-c722-4e6b-9309-1420838257e0","Type":"ContainerStarted","Data":"0e77b12bf32788aa02d3e12d5d6283c778d943ced5271ce7d019bc9c50adb4a8"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.188481 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" podUID="9c5af548-c722-4e6b-9309-1420838257e0" Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.189438 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" event={"ID":"602535d1-0abe-471e-8409-31319af7bd4b","Type":"ContainerStarted","Data":"881a33f534ffe0348a7a3429879983d7cc53f438504535993993d815d773a667"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.191290 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" 
event={"ID":"e163eac0-ea1f-4002-9469-844240d7a44c","Type":"ContainerStarted","Data":"3ffbe6cbb668d6a79a8bfff03336b81154cc1684b69a73b1785f43fe8919f5be"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.192811 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" event={"ID":"91002269-9fe0-44d2-9dbd-9e4cf58274bf","Type":"ContainerStarted","Data":"092e3ea171bf1a38e58c0bd2fa8536b28b188efc4ad52027f6615bb2b32a399a"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.193942 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" event={"ID":"fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e","Type":"ContainerStarted","Data":"ef92e7f14f62a52fd19b2c102fce3b203f36bb9c4eee9a1536a90b2df20fa0ea"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.194820 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" event={"ID":"64ed590e-59b6-44c8-baee-324162d099b8","Type":"ContainerStarted","Data":"5551bf3b1a44fbda301ac3a339c869610be99e0a1b318b669ea8c1f969221dc6"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.195912 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" podUID="64ed590e-59b6-44c8-baee-324162d099b8" Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.196394 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" 
event={"ID":"aa5063d7-2358-4149-a3b9-ef2ce138faf4","Type":"ContainerStarted","Data":"3d1135f0c722109ff280553923d9cbc96e6a38aa71e2d08dafb98985306203ff"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.205532 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" event={"ID":"3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d","Type":"ContainerStarted","Data":"5e33d80ac78489e01a110a17e7af94af4091b7ba46319e949ba8900d9738fc32"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.208443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" event={"ID":"9f5779a5-4cda-40dc-831d-950f97eae317","Type":"ContainerStarted","Data":"c147950560919e5e5a88c8ec7a2c488fe3860e63e024634b661022f3822f5035"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.214427 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" podUID="9f5779a5-4cda-40dc-831d-950f97eae317" Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.219444 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" event={"ID":"eaed166e-39b5-45ca-8a65-a22710d5fe37","Type":"ContainerStarted","Data":"8c2e63627629d8e721a1e357e303b313dc5f495a90ce248f16c8662b394c1381"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.225264 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" 
event={"ID":"4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65","Type":"ContainerStarted","Data":"d84dc9bc93636cd29ea80b1f2ad3891e4865821023bb19dd507c583b13389a5c"} Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.237302 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" event={"ID":"2126b31b-0444-43e4-a250-837f37d476aa","Type":"ContainerStarted","Data":"466e59eac13e04b3cf318adee9db4576b2c7c1eb6e6a6f7837da7664ae9ff9cf"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.240345 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" podUID="2126b31b-0444-43e4-a250-837f37d476aa" Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.241944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" event={"ID":"e4a54646-39cf-4e42-9367-487ea4f7d8a4","Type":"ContainerStarted","Data":"b89b1c622c6e37cace90f517d6acafdefbc800bc06d9777043b717525f34ebd6"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.243512 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" podUID="e4a54646-39cf-4e42-9367-487ea4f7d8a4" Feb 19 15:25:51 crc kubenswrapper[4810]: I0219 15:25:51.245520 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" 
event={"ID":"69b7e96d-bce6-4653-998e-3bf5d159ae5a","Type":"ContainerStarted","Data":"be03b5fdf3aec4df166c21467084dad324575f6f427865e00b905c6b2b049090"} Feb 19 15:25:51 crc kubenswrapper[4810]: E0219 15:25:51.255878 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" podUID="69b7e96d-bce6-4653-998e-3bf5d159ae5a" Feb 19 15:25:52 crc kubenswrapper[4810]: I0219 15:25:52.061127 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.061400 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.061664 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. No retries permitted until 2026-02-19 15:25:56.061525481 +0000 UTC m=+985.543555605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found Feb 19 15:25:52 crc kubenswrapper[4810]: I0219 15:25:52.261735 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gx7qf" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="registry-server" containerID="cri-o://5cab9d89ebbb4715343c797d52130d009456c8d3d9eaf887c80836933b581c07" gracePeriod=2 Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296070 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" podUID="e4a54646-39cf-4e42-9367-487ea4f7d8a4" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296118 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" podUID="2126b31b-0444-43e4-a250-837f37d476aa" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296228 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/openstack-k8s-operators/watcher-operator:eaf82eeed7c641cca4b0e467ff9bfd7468ff8986\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" podUID="9c5af548-c722-4e6b-9309-1420838257e0" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296278 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" podUID="64ed590e-59b6-44c8-baee-324162d099b8" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296314 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" podUID="69b7e96d-bce6-4653-998e-3bf5d159ae5a" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.296366 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" podUID="9f5779a5-4cda-40dc-831d-950f97eae317" Feb 19 15:25:52 crc kubenswrapper[4810]: I0219 15:25:52.569849 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" 
Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.570035 4810 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.570080 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert podName:c677bdd0-7248-4b02-9ab4-035c034a976a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:56.570067136 +0000 UTC m=+986.052097260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" (UID: "c677bdd0-7248-4b02-9ab4-035c034a976a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 15:25:52 crc kubenswrapper[4810]: I0219 15:25:52.776156 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:52 crc kubenswrapper[4810]: I0219 15:25:52.776220 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.776266 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 
19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.776315 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:56.776300891 +0000 UTC m=+986.258331005 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found
Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.776477 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 19 15:25:52 crc kubenswrapper[4810]: E0219 15:25:52.776549 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:25:56.776531007 +0000 UTC m=+986.258561131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found
Feb 19 15:25:53 crc kubenswrapper[4810]: I0219 15:25:53.285823 4810 generic.go:334] "Generic (PLEG): container finished" podID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerID="5cab9d89ebbb4715343c797d52130d009456c8d3d9eaf887c80836933b581c07" exitCode=0
Feb 19 15:25:53 crc kubenswrapper[4810]: I0219 15:25:53.286104 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerDied","Data":"5cab9d89ebbb4715343c797d52130d009456c8d3d9eaf887c80836933b581c07"}
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.303369 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7qf" event={"ID":"1715cc96-a86a-40d8-8f5a-1a4f35129bd1","Type":"ContainerDied","Data":"c9c6c8774d7ca4a97936ace4183e794c37a4dbb46f2dcfbc1cbf73f1bab26b9b"}
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.304023 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c6c8774d7ca4a97936ace4183e794c37a4dbb46f2dcfbc1cbf73f1bab26b9b"
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.323788 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7qf"
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.413093 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpjhx\" (UniqueName: \"kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx\") pod \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") "
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.413179 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities\") pod \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") "
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.413270 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content\") pod \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\" (UID: \"1715cc96-a86a-40d8-8f5a-1a4f35129bd1\") "
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.414297 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities" (OuterVolumeSpecName: "utilities") pod "1715cc96-a86a-40d8-8f5a-1a4f35129bd1" (UID: "1715cc96-a86a-40d8-8f5a-1a4f35129bd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.430250 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx" (OuterVolumeSpecName: "kube-api-access-tpjhx") pod "1715cc96-a86a-40d8-8f5a-1a4f35129bd1" (UID: "1715cc96-a86a-40d8-8f5a-1a4f35129bd1"). InnerVolumeSpecName "kube-api-access-tpjhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.442013 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1715cc96-a86a-40d8-8f5a-1a4f35129bd1" (UID: "1715cc96-a86a-40d8-8f5a-1a4f35129bd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.514777 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpjhx\" (UniqueName: \"kubernetes.io/projected/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-kube-api-access-tpjhx\") on node \"crc\" DevicePath \"\""
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.514805 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 15:25:55 crc kubenswrapper[4810]: I0219 15:25:55.514814 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1715cc96-a86a-40d8-8f5a-1a4f35129bd1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.122900 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl"
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.123183 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.123525 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. No retries permitted until 2026-02-19 15:26:04.123488836 +0000 UTC m=+993.605519150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.309779 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7qf"
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.336887 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"]
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.350259 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7qf"]
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.631061 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.631258 4810 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.631353 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert podName:c677bdd0-7248-4b02-9ab4-035c034a976a nodeName:}" failed. No retries permitted until 2026-02-19 15:26:04.631336414 +0000 UTC m=+994.113366538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" (UID: "c677bdd0-7248-4b02-9ab4-035c034a976a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.833717 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"
Feb 19 15:25:56 crc kubenswrapper[4810]: I0219 15:25:56.833906 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.834035 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.834094 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:26:04.834077183 +0000 UTC m=+994.316107317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.834621 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 19 15:25:56 crc kubenswrapper[4810]: E0219 15:25:56.834747 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:26:04.834719419 +0000 UTC m=+994.316749583 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found
Feb 19 15:25:57 crc kubenswrapper[4810]: I0219 15:25:57.448108 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" path="/var/lib/kubelet/pods/1715cc96-a86a-40d8-8f5a-1a4f35129bd1/volumes"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.404790 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.405730 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmsf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-5xnwd_openstack-operators(e163eac0-ea1f-4002-9469-844240d7a44c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.406864 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" podUID="e163eac0-ea1f-4002-9469-844240d7a44c"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.982654 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.982861 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drb5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-jjqv2_openstack-operators(4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 15:26:03 crc kubenswrapper[4810]: E0219 15:26:03.984162 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" podUID="4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.151409 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl"
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.151573 4810 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.151770 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert podName:4898d4eb-d474-44bc-9a38-e36f300d132f nodeName:}" failed. No retries permitted until 2026-02-19 15:26:20.151750481 +0000 UTC m=+1009.633780605 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert") pod "infra-operator-controller-manager-79d975b745-2kkhl" (UID: "4898d4eb-d474-44bc-9a38-e36f300d132f") : secret "infra-operator-webhook-server-cert" not found
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.381160 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" podUID="e163eac0-ea1f-4002-9469-844240d7a44c"
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.381450 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" podUID="4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.659259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.665899 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c677bdd0-7248-4b02-9ab4-035c034a976a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt\" (UID: \"c677bdd0-7248-4b02-9ab4-035c034a976a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.758368 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.861983 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"
Feb 19 15:26:04 crc kubenswrapper[4810]: I0219 15:26:04.862233 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.862427 4810 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.862445 4810 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.862521 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:26:20.862500373 +0000 UTC m=+1010.344530497 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "metrics-server-cert" not found
Feb 19 15:26:04 crc kubenswrapper[4810]: E0219 15:26:04.862542 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs podName:a6f83f3c-26f4-472f-9fcd-ae8049f1819a nodeName:}" failed. No retries permitted until 2026-02-19 15:26:20.862535454 +0000 UTC m=+1010.344565578 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs") pod "openstack-operator-controller-manager-6d464797d7-lrlqc" (UID: "a6f83f3c-26f4-472f-9fcd-ae8049f1819a") : secret "webhook-server-cert" not found
Feb 19 15:26:06 crc kubenswrapper[4810]: E0219 15:26:06.777086 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"
Feb 19 15:26:06 crc kubenswrapper[4810]: E0219 15:26:06.777645 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jj652,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-l67cq_openstack-operators(fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 15:26:06 crc kubenswrapper[4810]: E0219 15:26:06.778843 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" podUID="fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e"
Feb 19 15:26:07 crc kubenswrapper[4810]: E0219 15:26:07.404633 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" podUID="fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.491279 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt"]
Feb 19 15:26:10 crc kubenswrapper[4810]: W0219 15:26:10.532394 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc677bdd0_7248_4b02_9ab4_035c034a976a.slice/crio-9bd911d4191f9996bb14d48fcb4c64180a01369afca29b1071edfda5c752d520 WatchSource:0}: Error finding container 9bd911d4191f9996bb14d48fcb4c64180a01369afca29b1071edfda5c752d520: Status 404 returned error can't find the container with id 9bd911d4191f9996bb14d48fcb4c64180a01369afca29b1071edfda5c752d520
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.636677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" event={"ID":"aa5063d7-2358-4149-a3b9-ef2ce138faf4","Type":"ContainerStarted","Data":"77b98229ac7d278eb7f5b1bf02ecc266acdc3a05ff1fc7a15da7839f41af2323"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.637422 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.644093 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" event={"ID":"c677bdd0-7248-4b02-9ab4-035c034a976a","Type":"ContainerStarted","Data":"9bd911d4191f9996bb14d48fcb4c64180a01369afca29b1071edfda5c752d520"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.652638 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" event={"ID":"eaed166e-39b5-45ca-8a65-a22710d5fe37","Type":"ContainerStarted","Data":"2133399ff4508201c9762a969ef6cdc5dcdaf21e1b3d417e2b4091e39f44dd8a"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.653214 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.655870 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" event={"ID":"602535d1-0abe-471e-8409-31319af7bd4b","Type":"ContainerStarted","Data":"5b98fc5826828b6e5d6d1564bd1ce17f85406817dd488f3a9439a484196dca50"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.655934 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.661315 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" event={"ID":"52bb990c-eff0-4673-be27-d55d433bef0d","Type":"ContainerStarted","Data":"a6166d3f7b76c1da4fec83b36c9ae28da0845b52cb4080a2f04785b2cf1903aa"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.661609 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.663872 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" podStartSLOduration=5.965970663 podStartE2EDuration="22.663858884s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.589590562 +0000 UTC m=+980.071620676" lastFinishedPulling="2026-02-19 15:26:07.287478773 +0000 UTC m=+996.769508897" observedRunningTime="2026-02-19 15:26:10.661369023 +0000 UTC m=+1000.143399147" watchObservedRunningTime="2026-02-19 15:26:10.663858884 +0000 UTC m=+1000.145889008"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.668617 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" event={"ID":"942f40af-0498-4865-99da-bdcd068ef449","Type":"ContainerStarted","Data":"8690214eca9f1721e0c93fceded3db6194005ed818aa1a86353f6496525afdf6"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.669231 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.671960 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" event={"ID":"f0ab3643-d267-4902-af1f-cbcbdd7e5e41","Type":"ContainerStarted","Data":"3de5d7267245117e1c54bdc12c8f34834af0724c0d5220646ecbb63186b0eab0"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.672246 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.674089 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" event={"ID":"2106e7b5-bb83-464a-a43f-943f22b55078","Type":"ContainerStarted","Data":"6155112dba7f89415f14e7a11f5d302419cd73cf8867aa21b8ddf088aa99bd22"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.674500 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.675796 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" event={"ID":"91002269-9fe0-44d2-9dbd-9e4cf58274bf","Type":"ContainerStarted","Data":"55c700b041bb863e91e04aba3e853e32c1bfb2457f6dbc001a8a7c7344d0d13a"}
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.676179 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.691061 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" podStartSLOduration=5.697753038 podStartE2EDuration="22.69104145s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.294240892 +0000 UTC m=+979.776271016" lastFinishedPulling="2026-02-19 15:26:07.287529304 +0000 UTC m=+996.769559428" observedRunningTime="2026-02-19 15:26:10.676555515 +0000 UTC m=+1000.158585639" watchObservedRunningTime="2026-02-19 15:26:10.69104145 +0000 UTC m=+1000.173071574"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.693395 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.712129 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" podStartSLOduration=5.985632864 podStartE2EDuration="22.712111766s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.561005651 +0000 UTC m=+980.043035775" lastFinishedPulling="2026-02-19 15:26:07.287484553 +0000 UTC m=+996.769514677" observedRunningTime="2026-02-19 15:26:10.706687013 +0000 UTC m=+1000.188717137" watchObservedRunningTime="2026-02-19 15:26:10.712111766 +0000 UTC m=+1000.194141890"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.769697 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" podStartSLOduration=5.491500272 podStartE2EDuration="22.769677397s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.008114369 +0000 UTC m=+979.490144493" lastFinishedPulling="2026-02-19 15:26:07.286291484 +0000 UTC m=+996.768321618" observedRunningTime="2026-02-19 15:26:10.765862504 +0000 UTC m=+1000.247892638" watchObservedRunningTime="2026-02-19 15:26:10.769677397 +0000 UTC m=+1000.251707521"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.771355 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" podStartSLOduration=5.777916303 podStartE2EDuration="22.771348168s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.293817522 +0000 UTC m=+979.775847646" lastFinishedPulling="2026-02-19 15:26:07.287249387 +0000 UTC m=+996.769279511" observedRunningTime="2026-02-19 15:26:10.734722051 +0000 UTC m=+1000.216752165" watchObservedRunningTime="2026-02-19 15:26:10.771348168 +0000 UTC m=+1000.253378292"
Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.798293 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" podStartSLOduration=5.505426933 podStartE2EDuration="22.798273478s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:49.997379125 +0000 UTC m=+979.479409249" lastFinishedPulling="2026-02-19 15:26:07.29022567 +0000 UTC m=+996.772255794" observedRunningTime="2026-02-19 15:26:10.797439048 +0000 UTC m=+1000.279469172" watchObservedRunningTime="2026-02-19 15:26:10.798273478 +0000 
UTC m=+1000.280303602" Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.820486 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" podStartSLOduration=6.873071676 podStartE2EDuration="22.820467382s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.223860267 +0000 UTC m=+979.705890391" lastFinishedPulling="2026-02-19 15:26:06.171255973 +0000 UTC m=+995.653286097" observedRunningTime="2026-02-19 15:26:10.81957226 +0000 UTC m=+1000.301602384" watchObservedRunningTime="2026-02-19 15:26:10.820467382 +0000 UTC m=+1000.302497506" Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.850547 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" podStartSLOduration=6.64435365 podStartE2EDuration="22.850519509s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:49.965122995 +0000 UTC m=+979.447153109" lastFinishedPulling="2026-02-19 15:26:06.171288854 +0000 UTC m=+995.653318968" observedRunningTime="2026-02-19 15:26:10.841919608 +0000 UTC m=+1000.323949732" watchObservedRunningTime="2026-02-19 15:26:10.850519509 +0000 UTC m=+1000.332549633" Feb 19 15:26:10 crc kubenswrapper[4810]: I0219 15:26:10.870936 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" podStartSLOduration=4.908254105 podStartE2EDuration="22.870920699s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:49.324968723 +0000 UTC m=+978.806998847" lastFinishedPulling="2026-02-19 15:26:07.287635317 +0000 UTC m=+996.769665441" observedRunningTime="2026-02-19 15:26:10.870068068 +0000 UTC m=+1000.352098192" watchObservedRunningTime="2026-02-19 15:26:10.870920699 +0000 UTC m=+1000.352950813" 
Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.702466 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" event={"ID":"e4a54646-39cf-4e42-9367-487ea4f7d8a4","Type":"ContainerStarted","Data":"a7f4502d6116a297e20819226e83675436d5f60415c41d5b045a650e129aa64e"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.702951 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.711536 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" event={"ID":"3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d","Type":"ContainerStarted","Data":"cc0439d25c9995993839b0b44bd658156506ccca4d0f014da53ea8ee3103f32c"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.711637 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.720293 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" event={"ID":"9c5af548-c722-4e6b-9309-1420838257e0","Type":"ContainerStarted","Data":"fbbd153a52771d3602f93b40766d5c0ad9da72d9fd0e23c48ce28db0bada5a16"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.720537 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.724532 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" podStartSLOduration=4.2696109 podStartE2EDuration="23.724516342s" podCreationTimestamp="2026-02-19 
15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.59973235 +0000 UTC m=+980.081762474" lastFinishedPulling="2026-02-19 15:26:10.054637792 +0000 UTC m=+999.536667916" observedRunningTime="2026-02-19 15:26:11.720197506 +0000 UTC m=+1001.202227630" watchObservedRunningTime="2026-02-19 15:26:11.724516342 +0000 UTC m=+1001.206546466" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.736174 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" event={"ID":"64ed590e-59b6-44c8-baee-324162d099b8","Type":"ContainerStarted","Data":"d7e363cde92e18a78213fab06f57688d9039d3aad2d3af3c9dc21850d34bf557"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.738416 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" podStartSLOduration=7.018776479 podStartE2EDuration="23.738403343s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.567721646 +0000 UTC m=+980.049751770" lastFinishedPulling="2026-02-19 15:26:07.28734851 +0000 UTC m=+996.769378634" observedRunningTime="2026-02-19 15:26:11.735544313 +0000 UTC m=+1001.217574437" watchObservedRunningTime="2026-02-19 15:26:11.738403343 +0000 UTC m=+1001.220433467" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.748482 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" event={"ID":"1217b757-0f1c-4c4e-9abe-55875992915d","Type":"ContainerStarted","Data":"071ca87e7010f5b69a9cae2d7f3af2f9b9a955fcc849110d9ce02549a2eb9920"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.756432 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" 
event={"ID":"e2942952-ce19-4053-91da-05623c954167","Type":"ContainerStarted","Data":"7a94575fd061fa30dda9fc0e5ed2f7e123fa39aef8efd06a40c09fd2e63cd95a"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.756963 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.758873 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" event={"ID":"2126b31b-0444-43e4-a250-837f37d476aa","Type":"ContainerStarted","Data":"c09d1663a8f2d2892250866e96e5d5843f2e5980c3c02f51cfcdaa723c017a6f"} Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.759305 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.775810 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" podStartSLOduration=4.182789024 podStartE2EDuration="23.775792499s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.65398032 +0000 UTC m=+980.136010444" lastFinishedPulling="2026-02-19 15:26:10.246983795 +0000 UTC m=+999.729013919" observedRunningTime="2026-02-19 15:26:11.774846046 +0000 UTC m=+1001.256876170" watchObservedRunningTime="2026-02-19 15:26:11.775792499 +0000 UTC m=+1001.257822623" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.813986 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-k98c8" podStartSLOduration=4.276725097 podStartE2EDuration="23.813967755s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.644743934 +0000 UTC m=+980.126774048" 
lastFinishedPulling="2026-02-19 15:26:10.181986582 +0000 UTC m=+999.664016706" observedRunningTime="2026-02-19 15:26:11.812374686 +0000 UTC m=+1001.294404830" watchObservedRunningTime="2026-02-19 15:26:11.813967755 +0000 UTC m=+1001.295997879" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.834970 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" podStartSLOduration=4.377792452 podStartE2EDuration="23.834955889s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.599372011 +0000 UTC m=+980.081402135" lastFinishedPulling="2026-02-19 15:26:10.056535448 +0000 UTC m=+999.538565572" observedRunningTime="2026-02-19 15:26:11.832580371 +0000 UTC m=+1001.314610485" watchObservedRunningTime="2026-02-19 15:26:11.834955889 +0000 UTC m=+1001.316986013" Feb 19 15:26:11 crc kubenswrapper[4810]: I0219 15:26:11.852490 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" podStartSLOduration=6.566986155 podStartE2EDuration="23.852475089s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.002089561 +0000 UTC m=+979.484119685" lastFinishedPulling="2026-02-19 15:26:07.287578465 +0000 UTC m=+996.769608619" observedRunningTime="2026-02-19 15:26:11.84842882 +0000 UTC m=+1001.330458944" watchObservedRunningTime="2026-02-19 15:26:11.852475089 +0000 UTC m=+1001.334505213" Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.789662 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" event={"ID":"c677bdd0-7248-4b02-9ab4-035c034a976a","Type":"ContainerStarted","Data":"462bac6d4d85d2f1f193db02df486b4786815b87c40f6d1ac63a7e3d15f3e30b"} Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.790184 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.792407 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" event={"ID":"69b7e96d-bce6-4653-998e-3bf5d159ae5a","Type":"ContainerStarted","Data":"fd39e0f422569c85b060c3784b9b78c986335f90a333f906a31498c40b3c9346"} Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.793133 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.795689 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" event={"ID":"9f5779a5-4cda-40dc-831d-950f97eae317","Type":"ContainerStarted","Data":"619416ce7877401d8fcb6b251da7fdd31bd32f494e99ecbcb555e3ea0b38c198"} Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.795928 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 15:26:15.819871 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" podStartSLOduration=23.671060802 podStartE2EDuration="27.819853685s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:26:10.535103498 +0000 UTC m=+1000.017133622" lastFinishedPulling="2026-02-19 15:26:14.683896381 +0000 UTC m=+1004.165926505" observedRunningTime="2026-02-19 15:26:15.815245302 +0000 UTC m=+1005.297275426" watchObservedRunningTime="2026-02-19 15:26:15.819853685 +0000 UTC m=+1005.301883809" Feb 19 15:26:15 crc kubenswrapper[4810]: I0219 
15:26:15.836704 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" podStartSLOduration=3.816759192 podStartE2EDuration="27.836687658s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.644928008 +0000 UTC m=+980.126958132" lastFinishedPulling="2026-02-19 15:26:14.664856454 +0000 UTC m=+1004.146886598" observedRunningTime="2026-02-19 15:26:15.834646938 +0000 UTC m=+1005.316677082" watchObservedRunningTime="2026-02-19 15:26:15.836687658 +0000 UTC m=+1005.318717802" Feb 19 15:26:16 crc kubenswrapper[4810]: I0219 15:26:16.805313 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" event={"ID":"4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65","Type":"ContainerStarted","Data":"b13e36b05d5ad7005a8d9facb291e614631ad1e73b410622cd4df8e50f40d87c"} Feb 19 15:26:16 crc kubenswrapper[4810]: I0219 15:26:16.805923 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:26:16 crc kubenswrapper[4810]: I0219 15:26:16.825155 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" podStartSLOduration=4.769583177 podStartE2EDuration="28.825130436s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.612719709 +0000 UTC m=+980.094749833" lastFinishedPulling="2026-02-19 15:26:14.668266978 +0000 UTC m=+1004.150297092" observedRunningTime="2026-02-19 15:26:15.853418018 +0000 UTC m=+1005.335448142" watchObservedRunningTime="2026-02-19 15:26:16.825130436 +0000 UTC m=+1006.307160590" Feb 19 15:26:16 crc kubenswrapper[4810]: I0219 15:26:16.833096 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" podStartSLOduration=3.263895929 podStartE2EDuration="28.833083531s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.401796638 +0000 UTC m=+979.883826762" lastFinishedPulling="2026-02-19 15:26:15.97098423 +0000 UTC m=+1005.453014364" observedRunningTime="2026-02-19 15:26:16.823375733 +0000 UTC m=+1006.305405897" watchObservedRunningTime="2026-02-19 15:26:16.833083531 +0000 UTC m=+1006.315113695" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.432795 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jxmt5" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.471787 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-z5fb9" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.513751 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qz68t" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.529795 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-ffm66" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.546856 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gnmlp" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.604522 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-nnps5" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.668611 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-mzslt" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.753366 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vc7cw" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.825051 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" event={"ID":"e163eac0-ea1f-4002-9469-844240d7a44c","Type":"ContainerStarted","Data":"20225a486423e9c7cd508a3be1662a87ec4639c953f782c9ba3643ae52196769"} Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.826169 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.843470 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" podStartSLOduration=3.257752499 podStartE2EDuration="30.843452527s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.551958769 +0000 UTC m=+980.033988893" lastFinishedPulling="2026-02-19 15:26:18.137658767 +0000 UTC m=+1007.619688921" observedRunningTime="2026-02-19 15:26:18.837786488 +0000 UTC m=+1008.319816612" watchObservedRunningTime="2026-02-19 15:26:18.843452527 +0000 UTC m=+1008.325482651" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.904236 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-6fqmd" Feb 19 15:26:18 crc kubenswrapper[4810]: I0219 15:26:18.961369 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-mkfsc" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.061771 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-vcbwg" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.185857 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-7tzvr" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.215342 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-t44nb" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.248707 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-px9zx" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.350448 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-pw9kt" Feb 19 15:26:19 crc kubenswrapper[4810]: I0219 15:26:19.376626 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-798847869b-dlmvg" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.201212 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.208352 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4898d4eb-d474-44bc-9a38-e36f300d132f-cert\") pod \"infra-operator-controller-manager-79d975b745-2kkhl\" (UID: \"4898d4eb-d474-44bc-9a38-e36f300d132f\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.413354 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.686115 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl"] Feb 19 15:26:20 crc kubenswrapper[4810]: W0219 15:26:20.693657 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4898d4eb_d474_44bc_9a38_e36f300d132f.slice/crio-619828b583afcb77dfbcc5aebfc67debe1d29496770cc320fdde59e0df1247e1 WatchSource:0}: Error finding container 619828b583afcb77dfbcc5aebfc67debe1d29496770cc320fdde59e0df1247e1: Status 404 returned error can't find the container with id 619828b583afcb77dfbcc5aebfc67debe1d29496770cc320fdde59e0df1247e1 Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.853307 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" event={"ID":"4898d4eb-d474-44bc-9a38-e36f300d132f","Type":"ContainerStarted","Data":"619828b583afcb77dfbcc5aebfc67debe1d29496770cc320fdde59e0df1247e1"} Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.919302 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.919385 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.925439 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-metrics-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:26:20 crc kubenswrapper[4810]: I0219 15:26:20.926250 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a6f83f3c-26f4-472f-9fcd-ae8049f1819a-webhook-certs\") pod \"openstack-operator-controller-manager-6d464797d7-lrlqc\" (UID: \"a6f83f3c-26f4-472f-9fcd-ae8049f1819a\") " pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:26:21 crc kubenswrapper[4810]: I0219 15:26:21.210005 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:26:21 crc kubenswrapper[4810]: I0219 15:26:21.404180 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc"] Feb 19 15:26:21 crc kubenswrapper[4810]: W0219 15:26:21.408251 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f83f3c_26f4_472f_9fcd_ae8049f1819a.slice/crio-d48fa01d3b3b7599b20727651737df2e9b8508e2512f70dccccb80c680c2754c WatchSource:0}: Error finding container d48fa01d3b3b7599b20727651737df2e9b8508e2512f70dccccb80c680c2754c: Status 404 returned error can't find the container with id d48fa01d3b3b7599b20727651737df2e9b8508e2512f70dccccb80c680c2754c Feb 19 15:26:21 crc kubenswrapper[4810]: I0219 15:26:21.860229 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" event={"ID":"a6f83f3c-26f4-472f-9fcd-ae8049f1819a","Type":"ContainerStarted","Data":"d48fa01d3b3b7599b20727651737df2e9b8508e2512f70dccccb80c680c2754c"} Feb 19 15:26:24 crc kubenswrapper[4810]: I0219 15:26:24.767041 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt" Feb 19 15:26:27 crc kubenswrapper[4810]: I0219 15:26:27.920685 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" event={"ID":"a6f83f3c-26f4-472f-9fcd-ae8049f1819a","Type":"ContainerStarted","Data":"ef7cc2bef6be2daef684ae24d45cdedb1b1d9ec3d0d46d71c9f10f442ee78855"} Feb 19 15:26:28 crc kubenswrapper[4810]: I0219 15:26:28.929257 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 
15:26:28 crc kubenswrapper[4810]: I0219 15:26:28.963281 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" podStartSLOduration=40.963263707 podStartE2EDuration="40.963263707s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:26:28.955346833 +0000 UTC m=+1018.437376957" watchObservedRunningTime="2026-02-19 15:26:28.963263707 +0000 UTC m=+1018.445293831" Feb 19 15:26:28 crc kubenswrapper[4810]: I0219 15:26:28.996974 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-jjqv2" Feb 19 15:26:29 crc kubenswrapper[4810]: I0219 15:26:29.136035 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5xnwd" Feb 19 15:26:32 crc kubenswrapper[4810]: I0219 15:26:32.964995 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" event={"ID":"4898d4eb-d474-44bc-9a38-e36f300d132f","Type":"ContainerStarted","Data":"15670e664aa26d3cb152e3c860abaa75c754a1cacb9836235b8938dc924401ff"} Feb 19 15:26:32 crc kubenswrapper[4810]: I0219 15:26:32.966420 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:26:32 crc kubenswrapper[4810]: I0219 15:26:32.968091 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" event={"ID":"fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e","Type":"ContainerStarted","Data":"99da32fe6da3ac5f0ba83fa4c1f0667d7d2c0bfd3372978835cde4e3dea6b9d0"} Feb 19 15:26:32 crc kubenswrapper[4810]: I0219 15:26:32.968884 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:26:33 crc kubenswrapper[4810]: I0219 15:26:33.012670 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" podStartSLOduration=3.738657908 podStartE2EDuration="45.012645324s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:25:50.552016631 +0000 UTC m=+980.034046755" lastFinishedPulling="2026-02-19 15:26:31.826004007 +0000 UTC m=+1021.308034171" observedRunningTime="2026-02-19 15:26:33.005920299 +0000 UTC m=+1022.487950453" watchObservedRunningTime="2026-02-19 15:26:33.012645324 +0000 UTC m=+1022.494675468" Feb 19 15:26:33 crc kubenswrapper[4810]: I0219 15:26:33.014270 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" podStartSLOduration=33.804521898 podStartE2EDuration="45.014258053s" podCreationTimestamp="2026-02-19 15:25:48 +0000 UTC" firstStartedPulling="2026-02-19 15:26:20.696309644 +0000 UTC m=+1010.178339788" lastFinishedPulling="2026-02-19 15:26:31.906045819 +0000 UTC m=+1021.388075943" observedRunningTime="2026-02-19 15:26:32.986921453 +0000 UTC m=+1022.468951617" watchObservedRunningTime="2026-02-19 15:26:33.014258053 +0000 UTC m=+1022.496288197" Feb 19 15:26:38 crc kubenswrapper[4810]: I0219 15:26:38.917908 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l67cq" Feb 19 15:26:40 crc kubenswrapper[4810]: I0219 15:26:40.418985 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-2kkhl" Feb 19 15:26:41 crc kubenswrapper[4810]: I0219 15:26:41.217869 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6d464797d7-lrlqc" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.640280 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:00 crc kubenswrapper[4810]: E0219 15:27:00.641079 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="extract-content" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.641092 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="extract-content" Feb 19 15:27:00 crc kubenswrapper[4810]: E0219 15:27:00.641112 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="extract-utilities" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.641118 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="extract-utilities" Feb 19 15:27:00 crc kubenswrapper[4810]: E0219 15:27:00.641127 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="registry-server" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.641133 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="registry-server" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.641274 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1715cc96-a86a-40d8-8f5a-1a4f35129bd1" containerName="registry-server" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.643289 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.647465 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.647595 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zxgt6" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.647668 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.647795 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.661483 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.721819 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.723289 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.726066 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.734019 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.734443 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkxm\" (UniqueName: \"kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.734494 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.835631 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.835696 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc 
kubenswrapper[4810]: I0219 15:27:00.835740 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.835759 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwdg\" (UniqueName: \"kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.835799 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkxm\" (UniqueName: \"kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.836890 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.859457 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkxm\" (UniqueName: \"kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm\") pod \"dnsmasq-dns-6cf7b9b6b9-4h67z\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:00 crc kubenswrapper[4810]: 
I0219 15:27:00.936808 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.936886 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.936904 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwdg\" (UniqueName: \"kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.937767 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.937779 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.959941 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-slwdg\" (UniqueName: \"kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg\") pod \"dnsmasq-dns-5f48d6b889-hmnjw\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:00 crc kubenswrapper[4810]: I0219 15:27:00.962612 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:01 crc kubenswrapper[4810]: I0219 15:27:01.037820 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:01 crc kubenswrapper[4810]: I0219 15:27:01.192830 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:01 crc kubenswrapper[4810]: I0219 15:27:01.292376 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:01 crc kubenswrapper[4810]: W0219 15:27:01.311011 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod731d3bd2_70ab_4ec0_b574_a00042d0b3b2.slice/crio-4b12e6f7653c91c7a79e0ac9f496ebbaccb1093a59dc2281bd92a962e34bac03 WatchSource:0}: Error finding container 4b12e6f7653c91c7a79e0ac9f496ebbaccb1093a59dc2281bd92a962e34bac03: Status 404 returned error can't find the container with id 4b12e6f7653c91c7a79e0ac9f496ebbaccb1093a59dc2281bd92a962e34bac03 Feb 19 15:27:02 crc kubenswrapper[4810]: I0219 15:27:02.215736 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" event={"ID":"de8cf274-42f1-4c36-bca7-1d622bf61898","Type":"ContainerStarted","Data":"14a7ad043bd3a14329d8f25870e9bf1129047ecea007a33d63f7b5431ba94745"} Feb 19 15:27:02 crc kubenswrapper[4810]: I0219 15:27:02.216768 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" 
event={"ID":"731d3bd2-70ab-4ec0-b574-a00042d0b3b2","Type":"ContainerStarted","Data":"4b12e6f7653c91c7a79e0ac9f496ebbaccb1093a59dc2281bd92a962e34bac03"} Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.377845 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.394897 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.397218 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.407226 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.508030 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.508452 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.508527 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx72g\" (UniqueName: \"kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " 
pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.610467 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.610600 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.610687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx72g\" (UniqueName: \"kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.611897 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.612609 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.638704 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx72g\" (UniqueName: \"kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g\") pod \"dnsmasq-dns-c99647bb5-xkrgb\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.644350 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.680102 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6598d9876f-l2z6b"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.681218 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.694866 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6598d9876f-l2z6b"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.715140 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.820780 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7dz\" (UniqueName: \"kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.820928 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.820956 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.922621 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.922992 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " 
pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.923218 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7dz\" (UniqueName: \"kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.923636 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.923670 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.934557 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6598d9876f-l2z6b"] Feb 19 15:27:04 crc kubenswrapper[4810]: E0219 15:27:04.935224 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-qv7dz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" podUID="46eb09d6-3ebe-4cb9-ac84-21cc0f203c89" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.945765 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7dz\" (UniqueName: \"kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz\") pod \"dnsmasq-dns-6598d9876f-l2z6b\" (UID: 
\"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.967464 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:04 crc kubenswrapper[4810]: I0219 15:27:04.969124 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.005633 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.032297 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.032374 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.032438 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbn6\" (UniqueName: \"kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.133869 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.133967 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.134263 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbn6\" (UniqueName: \"kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.134926 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.135142 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config\") pod \"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.163161 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbn6\" (UniqueName: \"kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6\") pod 
\"dnsmasq-dns-5fdbdbc8cc-mddqt\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.234533 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.248446 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.305041 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.336654 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7dz\" (UniqueName: \"kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz\") pod \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.336749 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config\") pod \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.336839 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc\") pod \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\" (UID: \"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89\") " Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.337837 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"46eb09d6-3ebe-4cb9-ac84-21cc0f203c89" (UID: "46eb09d6-3ebe-4cb9-ac84-21cc0f203c89"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.338423 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config" (OuterVolumeSpecName: "config") pod "46eb09d6-3ebe-4cb9-ac84-21cc0f203c89" (UID: "46eb09d6-3ebe-4cb9-ac84-21cc0f203c89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.353672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz" (OuterVolumeSpecName: "kube-api-access-qv7dz") pod "46eb09d6-3ebe-4cb9-ac84-21cc0f203c89" (UID: "46eb09d6-3ebe-4cb9-ac84-21cc0f203c89"). InnerVolumeSpecName "kube-api-access-qv7dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.439336 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv7dz\" (UniqueName: \"kubernetes.io/projected/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-kube-api-access-qv7dz\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.439657 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.439669 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.555552 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.556812 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.560060 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.560407 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.560579 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.560722 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.560925 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-jflkp" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.561060 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.562543 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.563653 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.742939 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: 
\"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743009 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743062 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743079 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743149 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56shj\" (UniqueName: 
\"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-kube-api-access-56shj\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743171 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743215 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743291 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.743367 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.805089 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.807122 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.809509 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.809654 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.809794 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9r5f7" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.810022 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.810058 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.810031 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.810161 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.823068 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 
15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845652 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845707 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845763 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845792 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845819 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 
crc kubenswrapper[4810]: I0219 15:27:05.845890 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845935 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.845984 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.846009 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.846046 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.846099 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56shj\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-kube-api-access-56shj\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.846882 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.846997 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.847231 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.850901 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.851401 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.852093 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.854986 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.860449 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.867610 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.877169 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.877175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.881408 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56shj\" (UniqueName: \"kubernetes.io/projected/4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c-kube-api-access-56shj\") pod \"notifications-rabbitmq-server-0\" (UID: \"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.883236 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947174 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947203 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947307 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947377 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 
crc kubenswrapper[4810]: I0219 15:27:05.947451 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947588 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947643 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947662 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:05 crc kubenswrapper[4810]: I0219 15:27:05.947722 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.051097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.051229 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.051257 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.051747 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052122 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052238 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052435 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052492 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052594 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " 
pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052611 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052662 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052834 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.052976 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.053305 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.054141 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.055213 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.072948 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.077297 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.077882 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.090155 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.090185 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.092978 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.106403 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.108362 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.110388 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.110920 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.111057 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.111083 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.111240 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.115978 
4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.116160 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.116249 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxvfg" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.130654 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.240586 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6598d9876f-l2z6b" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256031 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmb8\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256078 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256127 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256164 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256183 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256199 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256216 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256235 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256275 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.256303 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.278583 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6598d9876f-l2z6b"] Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.284548 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6598d9876f-l2z6b"] Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357575 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357625 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357646 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357673 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357711 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357767 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357804 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357861 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtmb8\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357891 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.357967 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.358041 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.358154 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.358485 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.358744 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.359366 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.359991 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.360949 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 
15:27:06.361597 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.362086 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.362518 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.363715 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.384874 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.393178 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmb8\" (UniqueName: 
\"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:06 crc kubenswrapper[4810]: I0219 15:27:06.441633 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.457936 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46eb09d6-3ebe-4cb9-ac84-21cc0f203c89" path="/var/lib/kubelet/pods/46eb09d6-3ebe-4cb9-ac84-21cc0f203c89/volumes" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.513438 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.514674 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.520227 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.520467 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.520653 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.520702 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-22plv" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.526485 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.528089 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 15:27:07 crc 
kubenswrapper[4810]: I0219 15:27:07.676152 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.676216 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.676305 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrkxq\" (UniqueName: \"kubernetes.io/projected/c0ffb8ce-a356-4416-b96c-49db30ff1947-kube-api-access-qrkxq\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.676376 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.676399 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.677474 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-default\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.677522 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.677550 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.778981 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779069 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779149 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779197 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779305 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrkxq\" (UniqueName: \"kubernetes.io/projected/c0ffb8ce-a356-4416-b96c-49db30ff1947-kube-api-access-qrkxq\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779376 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779401 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.779475 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.781201 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-default\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.782625 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.784614 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.784990 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0ffb8ce-a356-4416-b96c-49db30ff1947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.785852 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0ffb8ce-a356-4416-b96c-49db30ff1947-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc 
kubenswrapper[4810]: I0219 15:27:07.791981 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.793111 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ffb8ce-a356-4416-b96c-49db30ff1947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.822434 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrkxq\" (UniqueName: \"kubernetes.io/projected/c0ffb8ce-a356-4416-b96c-49db30ff1947-kube-api-access-qrkxq\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.823276 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c0ffb8ce-a356-4416-b96c-49db30ff1947\") " pod="openstack/openstack-galera-0" Feb 19 15:27:07 crc kubenswrapper[4810]: I0219 15:27:07.838823 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.028747 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.032179 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.036294 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.036575 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-h5nm7" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.036857 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.037780 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.038758 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106406 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjf6\" (UniqueName: \"kubernetes.io/projected/30d11a24-9722-4e7a-9be5-f2bd00128167-kube-api-access-fcjf6\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106472 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106510 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106545 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106585 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106648 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.106726 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207522 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207615 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207660 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207693 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207731 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjf6\" (UniqueName: 
\"kubernetes.io/projected/30d11a24-9722-4e7a-9be5-f2bd00128167-kube-api-access-fcjf6\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207758 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207788 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.207820 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.208053 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.216960 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.217620 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.218321 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.218713 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d11a24-9722-4e7a-9be5-f2bd00128167-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.223858 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.225951 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d11a24-9722-4e7a-9be5-f2bd00128167-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.249950 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjf6\" (UniqueName: \"kubernetes.io/projected/30d11a24-9722-4e7a-9be5-f2bd00128167-kube-api-access-fcjf6\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.288563 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"30d11a24-9722-4e7a-9be5-f2bd00128167\") " pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.357967 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.385636 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.386668 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.389078 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-s95tz" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.389157 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.389633 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.396062 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.512128 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kolla-config\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.512243 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-config-data\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.512270 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.512296 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hzznx\" (UniqueName: \"kubernetes.io/projected/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kube-api-access-hzznx\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.512517 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.614355 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.614458 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kolla-config\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.614518 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-config-data\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.614538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " 
pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.614565 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzznx\" (UniqueName: \"kubernetes.io/projected/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kube-api-access-hzznx\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.615710 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kolla-config\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.616837 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-config-data\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.621406 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.624011 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.648889 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzznx\" (UniqueName: 
\"kubernetes.io/projected/eb773d46-7b9f-4ca4-b33c-9b800b9eafd7-kube-api-access-hzznx\") pod \"memcached-0\" (UID: \"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7\") " pod="openstack/memcached-0" Feb 19 15:27:09 crc kubenswrapper[4810]: I0219 15:27:09.701908 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.635526 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.636705 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.644052 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sdx2j" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.648963 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.744740 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrqf\" (UniqueName: \"kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf\") pod \"kube-state-metrics-0\" (UID: \"097bc4d1-5648-4607-9c49-286e4bbbe553\") " pod="openstack/kube-state-metrics-0" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.846493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrqf\" (UniqueName: \"kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf\") pod \"kube-state-metrics-0\" (UID: \"097bc4d1-5648-4607-9c49-286e4bbbe553\") " pod="openstack/kube-state-metrics-0" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.880382 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrqf\" (UniqueName: 
\"kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf\") pod \"kube-state-metrics-0\" (UID: \"097bc4d1-5648-4607-9c49-286e4bbbe553\") " pod="openstack/kube-state-metrics-0" Feb 19 15:27:11 crc kubenswrapper[4810]: I0219 15:27:11.957707 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.958891 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.960776 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.963237 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-x7hn6" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.963500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.964078 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.964272 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.964750 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.964979 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.965412 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.975202 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 15:27:12 crc kubenswrapper[4810]: I0219 15:27:12.984529 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063356 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063416 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063454 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063509 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063555 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063582 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ks5p\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063613 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063650 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063672 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.063702 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.164931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.164986 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165008 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ks5p\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: 
I0219 15:27:13.165024 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165052 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165068 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165298 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165342 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: 
I0219 15:27:13.165389 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.165412 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.166229 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.166285 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.166812 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" 
(UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.168842 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.183182 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.183201 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.183211 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.183364 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.183388 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e63bc62ea909687cb5abb0c5cf8da7008d795f1441aaff1987b707a42a388027/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.186470 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ks5p\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.191006 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.212487 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:13 crc kubenswrapper[4810]: I0219 15:27:13.302764 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.959533 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s5488"] Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.961384 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.963319 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-66nlv" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.968556 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.968667 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.985454 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5t6ds"] Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.987138 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996756 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run-ovn\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996827 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-log-ovn\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996850 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996864 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-combined-ca-bundle\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996880 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-ovn-controller-tls-certs\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " 
pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996903 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ngp2\" (UniqueName: \"kubernetes.io/projected/4a4fa57b-aa00-4866-b31e-df29f7f86480-kube-api-access-9ngp2\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.996922 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a4fa57b-aa00-4866-b31e-df29f7f86480-scripts\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:14 crc kubenswrapper[4810]: I0219 15:27:14.999419 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488"] Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.006633 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5t6ds"] Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.098979 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jzv\" (UniqueName: \"kubernetes.io/projected/542da555-4f39-4dff-b378-5306135244db-kube-api-access-g8jzv\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099032 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-log-ovn\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099050 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-etc-ovs\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/542da555-4f39-4dff-b378-5306135244db-scripts\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099080 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-log\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099096 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099111 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-combined-ca-bundle\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099125 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-ovn-controller-tls-certs\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099147 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ngp2\" (UniqueName: \"kubernetes.io/projected/4a4fa57b-aa00-4866-b31e-df29f7f86480-kube-api-access-9ngp2\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099169 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a4fa57b-aa00-4866-b31e-df29f7f86480-scripts\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099196 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-run\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099232 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-lib\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099247 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run-ovn\") pod 
\"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099720 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run-ovn\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.099814 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-log-ovn\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.100369 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a4fa57b-aa00-4866-b31e-df29f7f86480-var-run\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.101567 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a4fa57b-aa00-4866-b31e-df29f7f86480-scripts\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.105674 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-combined-ca-bundle\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.106478 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a4fa57b-aa00-4866-b31e-df29f7f86480-ovn-controller-tls-certs\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.119008 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ngp2\" (UniqueName: \"kubernetes.io/projected/4a4fa57b-aa00-4866-b31e-df29f7f86480-kube-api-access-9ngp2\") pod \"ovn-controller-s5488\" (UID: \"4a4fa57b-aa00-4866-b31e-df29f7f86480\") " pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-run\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200182 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-lib\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200225 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jzv\" (UniqueName: \"kubernetes.io/projected/542da555-4f39-4dff-b378-5306135244db-kube-api-access-g8jzv\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200254 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-etc-ovs\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200272 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/542da555-4f39-4dff-b378-5306135244db-scripts\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200287 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-log\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200484 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-lib\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200520 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-etc-ovs\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.200620 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-run\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " 
pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.202122 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/542da555-4f39-4dff-b378-5306135244db-scripts\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.202245 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/542da555-4f39-4dff-b378-5306135244db-var-log\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.232365 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jzv\" (UniqueName: \"kubernetes.io/projected/542da555-4f39-4dff-b378-5306135244db-kube-api-access-g8jzv\") pod \"ovn-controller-ovs-5t6ds\" (UID: \"542da555-4f39-4dff-b378-5306135244db\") " pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.296112 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5488" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.304303 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.314535 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.322987 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.324785 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.330187 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pnhq5" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.330368 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.330610 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.330725 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.330923 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.341270 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.506917 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.506995 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507024 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507073 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507114 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507308 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-config\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507488 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.507640 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9lj8\" (UniqueName: 
\"kubernetes.io/projected/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-kube-api-access-b9lj8\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9lj8\" (UniqueName: \"kubernetes.io/projected/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-kube-api-access-b9lj8\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609336 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609385 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609405 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609431 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609464 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-config\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609521 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.609837 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.610595 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.610983 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-config\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.611309 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.613302 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.613837 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.617563 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.629771 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.635220 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9lj8\" (UniqueName: \"kubernetes.io/projected/bdffb5e6-13bb-4c08-ad3c-52d8ded85431-kube-api-access-b9lj8\") pod \"ovsdbserver-nb-0\" (UID: \"bdffb5e6-13bb-4c08-ad3c-52d8ded85431\") " pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:15 crc kubenswrapper[4810]: I0219 15:27:15.653179 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.455058 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.461522 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.465082 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.465266 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rtzwd" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.465398 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.467020 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.469349 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.559809 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.559943 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.559983 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.560003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.560029 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnn4\" (UniqueName: \"kubernetes.io/projected/5b985124-01b7-430c-b5ea-b9fd095e5f5e-kube-api-access-fxnn4\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.560081 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.560316 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.560398 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662756 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662860 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnn4\" (UniqueName: \"kubernetes.io/projected/5b985124-01b7-430c-b5ea-b9fd095e5f5e-kube-api-access-fxnn4\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662918 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662973 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.662998 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.663019 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.664310 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.666060 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.666430 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.668021 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b985124-01b7-430c-b5ea-b9fd095e5f5e-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.673759 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.678261 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.681270 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b985124-01b7-430c-b5ea-b9fd095e5f5e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.681732 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnn4\" (UniqueName: \"kubernetes.io/projected/5b985124-01b7-430c-b5ea-b9fd095e5f5e-kube-api-access-fxnn4\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.688388 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b985124-01b7-430c-b5ea-b9fd095e5f5e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:18 crc kubenswrapper[4810]: I0219 15:27:18.795979 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:19 crc kubenswrapper[4810]: W0219 15:27:19.713611 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb773d46_7b9f_4ca4_b33c_9b800b9eafd7.slice/crio-af1bedfbf1e8ee4640f8b3f2501f92ef9cf3bd99570f284d000bc1d9845edff2 WatchSource:0}: Error finding container af1bedfbf1e8ee4640f8b3f2501f92ef9cf3bd99570f284d000bc1d9845edff2: Status 404 returned error can't find the container with id af1bedfbf1e8ee4640f8b3f2501f92ef9cf3bd99570f284d000bc1d9845edff2 Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.050148 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.285658 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.376264 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.383460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7","Type":"ContainerStarted","Data":"af1bedfbf1e8ee4640f8b3f2501f92ef9cf3bd99570f284d000bc1d9845edff2"} Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.389022 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.396517 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 15:27:20 crc kubenswrapper[4810]: I0219 15:27:20.489440 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:20 crc kubenswrapper[4810]: W0219 15:27:20.803604 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a3676ed_f06f_4dea_82a1_959716331113.slice/crio-65b3bafec6c943249f491880afc4a9c1d426f050515e89a7aeb5d3ee771259c3 WatchSource:0}: Error finding container 65b3bafec6c943249f491880afc4a9c1d426f050515e89a7aeb5d3ee771259c3: Status 404 returned error can't find the container with id 65b3bafec6c943249f491880afc4a9c1d426f050515e89a7aeb5d3ee771259c3 Feb 19 15:27:20 crc kubenswrapper[4810]: W0219 15:27:20.807289 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0ffb8ce_a356_4416_b96c_49db30ff1947.slice/crio-5113de562efb8ddfc222a7c7d8fa47d8888f899d7ace9235021e85e325a0f5e6 WatchSource:0}: Error finding container 5113de562efb8ddfc222a7c7d8fa47d8888f899d7ace9235021e85e325a0f5e6: Status 404 returned error can't find the container with id 5113de562efb8ddfc222a7c7d8fa47d8888f899d7ace9235021e85e325a0f5e6 Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.812682 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.812727 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.812847 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slwdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f48d6b889-hmnjw_openstack(731d3bd2-70ab-4ec0-b574-a00042d0b3b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:27:20 crc kubenswrapper[4810]: 
E0219 15:27:20.814254 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" podUID="731d3bd2-70ab-4ec0-b574-a00042d0b3b2" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.833007 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.833058 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.833164 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6cf7b9b6b9-4h67z_openstack(de8cf274-42f1-4c36-bca7-1d622bf61898): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:27:20 crc kubenswrapper[4810]: E0219 15:27:20.834379 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" podUID="de8cf274-42f1-4c36-bca7-1d622bf61898" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.285814 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.376312 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.385037 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.391209 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c","Type":"ContainerStarted","Data":"5c41be9999ee30bb993e3bad3c961a576efb4f2be70057a47eb8147e5881b09c"} Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.392265 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"097bc4d1-5648-4607-9c49-286e4bbbe553","Type":"ContainerStarted","Data":"7d6bd84bead9eb4536dd357117c11ae4d96e35d9a18c032d07c64f477200a6eb"} Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.393051 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerStarted","Data":"438e5fcabfeda7b104ccc004754827e00367d2ec7bbb19edfefdf5cb049ee1ce"} Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.393644 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerStarted","Data":"65b3bafec6c943249f491880afc4a9c1d426f050515e89a7aeb5d3ee771259c3"} Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.394347 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"c0ffb8ce-a356-4416-b96c-49db30ff1947","Type":"ContainerStarted","Data":"5113de562efb8ddfc222a7c7d8fa47d8888f899d7ace9235021e85e325a0f5e6"} Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.395129 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" event={"ID":"64a59633-cb6f-4631-a980-566894f0ce35","Type":"ContainerStarted","Data":"97bfa68e77f4ab8ae2a59549a783cdf08404be22506f85bef749cb5bf6fbe5cf"} Feb 19 15:27:21 crc kubenswrapper[4810]: W0219 15:27:21.649870 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec01803_2cfd_4e97_a1c0_216c2622e913.slice/crio-d66ef65309b98fdfd5a68e7c80449beb1e0f777f1d9cb1877aa1054da73700ab WatchSource:0}: Error finding container d66ef65309b98fdfd5a68e7c80449beb1e0f777f1d9cb1877aa1054da73700ab: Status 404 returned error can't find the container with id d66ef65309b98fdfd5a68e7c80449beb1e0f777f1d9cb1877aa1054da73700ab Feb 19 15:27:21 crc kubenswrapper[4810]: W0219 15:27:21.657933 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda220bc57_3f31_4851_ad5c_9f61359f7de5.slice/crio-2e7a3617c56f7d96899fe573b10e0613d97d13afc38ab2fb9c62813a642860d3 WatchSource:0}: Error finding container 2e7a3617c56f7d96899fe573b10e0613d97d13afc38ab2fb9c62813a642860d3: Status 404 returned error can't find the container with id 2e7a3617c56f7d96899fe573b10e0613d97d13afc38ab2fb9c62813a642860d3 Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.703990 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.751530 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.817688 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.824248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc\") pod \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.824345 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config\") pod \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.824557 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config\") pod \"de8cf274-42f1-4c36-bca7-1d622bf61898\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.824599 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkxm\" (UniqueName: \"kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm\") pod \"de8cf274-42f1-4c36-bca7-1d622bf61898\" (UID: \"de8cf274-42f1-4c36-bca7-1d622bf61898\") " Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.824639 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slwdg\" (UniqueName: \"kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg\") pod \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\" (UID: \"731d3bd2-70ab-4ec0-b574-a00042d0b3b2\") " Feb 19 15:27:21 crc 
kubenswrapper[4810]: I0219 15:27:21.825021 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config" (OuterVolumeSpecName: "config") pod "de8cf274-42f1-4c36-bca7-1d622bf61898" (UID: "de8cf274-42f1-4c36-bca7-1d622bf61898"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.825592 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config" (OuterVolumeSpecName: "config") pod "731d3bd2-70ab-4ec0-b574-a00042d0b3b2" (UID: "731d3bd2-70ab-4ec0-b574-a00042d0b3b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.825765 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "731d3bd2-70ab-4ec0-b574-a00042d0b3b2" (UID: "731d3bd2-70ab-4ec0-b574-a00042d0b3b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.828338 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg" (OuterVolumeSpecName: "kube-api-access-slwdg") pod "731d3bd2-70ab-4ec0-b574-a00042d0b3b2" (UID: "731d3bd2-70ab-4ec0-b574-a00042d0b3b2"). InnerVolumeSpecName "kube-api-access-slwdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.834563 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm" (OuterVolumeSpecName: "kube-api-access-5gkxm") pod "de8cf274-42f1-4c36-bca7-1d622bf61898" (UID: "de8cf274-42f1-4c36-bca7-1d622bf61898"). InnerVolumeSpecName "kube-api-access-5gkxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.933545 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.933862 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8cf274-42f1-4c36-bca7-1d622bf61898-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.933876 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkxm\" (UniqueName: \"kubernetes.io/projected/de8cf274-42f1-4c36-bca7-1d622bf61898-kube-api-access-5gkxm\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.933887 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slwdg\" (UniqueName: \"kubernetes.io/projected/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-kube-api-access-slwdg\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:21 crc kubenswrapper[4810]: I0219 15:27:21.933895 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/731d3bd2-70ab-4ec0-b574-a00042d0b3b2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.080147 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488"] 
Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.403446 4810 generic.go:334] "Generic (PLEG): container finished" podID="64a59633-cb6f-4631-a980-566894f0ce35" containerID="1d10f5e352636a23ee8d873911fea2dc7821da75356614e2daef6d4813ea231e" exitCode=0 Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.403509 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" event={"ID":"64a59633-cb6f-4631-a980-566894f0ce35","Type":"ContainerDied","Data":"1d10f5e352636a23ee8d873911fea2dc7821da75356614e2daef6d4813ea231e"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.405184 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488" event={"ID":"4a4fa57b-aa00-4866-b31e-df29f7f86480","Type":"ContainerStarted","Data":"b2c5a51b8f9fcfed99cc3efa2e88a6ae77828e6048de9d2b7bed0303f6313484"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.406584 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerStarted","Data":"2e7a3617c56f7d96899fe573b10e0613d97d13afc38ab2fb9c62813a642860d3"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.408408 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb773d46-7b9f-4ca4-b33c-9b800b9eafd7","Type":"ContainerStarted","Data":"956da0f3b539a4118dc7e177ffd26a630d9ad9bfed5f3294c24c5821c882bc03"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.408526 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.409723 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"30d11a24-9722-4e7a-9be5-f2bd00128167","Type":"ContainerStarted","Data":"463dc22f7557701fb1c6bdef718f136b22b3fd0a388a2348e4f6c48650f5916a"} Feb 19 15:27:22 crc kubenswrapper[4810]: 
I0219 15:27:22.411105 4810 generic.go:334] "Generic (PLEG): container finished" podID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerID="ed90cdf99f920236e6f409e68476a3cd86bdc892ec839211cda663eab550964c" exitCode=0 Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.411184 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" event={"ID":"eec01803-2cfd-4e97-a1c0-216c2622e913","Type":"ContainerDied","Data":"ed90cdf99f920236e6f409e68476a3cd86bdc892ec839211cda663eab550964c"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.411203 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" event={"ID":"eec01803-2cfd-4e97-a1c0-216c2622e913","Type":"ContainerStarted","Data":"d66ef65309b98fdfd5a68e7c80449beb1e0f777f1d9cb1877aa1054da73700ab"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.412293 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" event={"ID":"de8cf274-42f1-4c36-bca7-1d622bf61898","Type":"ContainerDied","Data":"14a7ad043bd3a14329d8f25870e9bf1129047ecea007a33d63f7b5431ba94745"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.412356 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-4h67z" Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.413621 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.413675 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f48d6b889-hmnjw" event={"ID":"731d3bd2-70ab-4ec0-b574-a00042d0b3b2","Type":"ContainerDied","Data":"4b12e6f7653c91c7a79e0ac9f496ebbaccb1093a59dc2281bd92a962e34bac03"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.414707 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b985124-01b7-430c-b5ea-b9fd095e5f5e","Type":"ContainerStarted","Data":"54a122a98e003d93dcaba0ff09a6daeac31c1f44f71a905a667894948962b204"} Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.461545 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.410523754 podStartE2EDuration="13.461526548s" podCreationTimestamp="2026-02-19 15:27:09 +0000 UTC" firstStartedPulling="2026-02-19 15:27:19.735310706 +0000 UTC m=+1069.217340870" lastFinishedPulling="2026-02-19 15:27:21.78631354 +0000 UTC m=+1071.268343664" observedRunningTime="2026-02-19 15:27:22.444611644 +0000 UTC m=+1071.926641768" watchObservedRunningTime="2026-02-19 15:27:22.461526548 +0000 UTC m=+1071.943556672" Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.507159 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.518381 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-hmnjw"] Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.531502 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.536869 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-4h67z"] Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 
15:27:22.745521 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5t6ds"] Feb 19 15:27:22 crc kubenswrapper[4810]: I0219 15:27:22.850728 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 15:27:22 crc kubenswrapper[4810]: W0219 15:27:22.875772 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdffb5e6_13bb_4c08_ad3c_52d8ded85431.slice/crio-e3c4b05e0858c37dce877a21389cd22932bcfb83ffb5f5f09b3a089cd54a1b1c WatchSource:0}: Error finding container e3c4b05e0858c37dce877a21389cd22932bcfb83ffb5f5f09b3a089cd54a1b1c: Status 404 returned error can't find the container with id e3c4b05e0858c37dce877a21389cd22932bcfb83ffb5f5f09b3a089cd54a1b1c Feb 19 15:27:23 crc kubenswrapper[4810]: I0219 15:27:23.426027 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bdffb5e6-13bb-4c08-ad3c-52d8ded85431","Type":"ContainerStarted","Data":"e3c4b05e0858c37dce877a21389cd22932bcfb83ffb5f5f09b3a089cd54a1b1c"} Feb 19 15:27:23 crc kubenswrapper[4810]: I0219 15:27:23.427493 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5t6ds" event={"ID":"542da555-4f39-4dff-b378-5306135244db","Type":"ContainerStarted","Data":"f40eb961af62f3727d6983a9058db071af156eff93cc9928b5cf97c5866909ab"} Feb 19 15:27:23 crc kubenswrapper[4810]: I0219 15:27:23.449121 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731d3bd2-70ab-4ec0-b574-a00042d0b3b2" path="/var/lib/kubelet/pods/731d3bd2-70ab-4ec0-b574-a00042d0b3b2/volumes" Feb 19 15:27:23 crc kubenswrapper[4810]: I0219 15:27:23.449660 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8cf274-42f1-4c36-bca7-1d622bf61898" path="/var/lib/kubelet/pods/de8cf274-42f1-4c36-bca7-1d622bf61898/volumes" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.809700 4810 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tbt28"] Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.813572 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.822858 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.825999 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tbt28"] Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930268 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03aad2b-8ca1-4310-8c11-3287fafcd66f-config\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930311 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovs-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930353 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovn-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930371 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbhr\" 
(UniqueName: \"kubernetes.io/projected/c03aad2b-8ca1-4310-8c11-3287fafcd66f-kube-api-access-khbhr\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930404 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.930425 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-combined-ca-bundle\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:27 crc kubenswrapper[4810]: I0219 15:27:27.995446 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039080 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03aad2b-8ca1-4310-8c11-3287fafcd66f-config\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039135 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovs-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 
15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039174 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovn-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039201 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khbhr\" (UniqueName: \"kubernetes.io/projected/c03aad2b-8ca1-4310-8c11-3287fafcd66f-kube-api-access-khbhr\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039237 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039274 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-combined-ca-bundle\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.039600 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovn-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 
15:27:28.039657 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c03aad2b-8ca1-4310-8c11-3287fafcd66f-ovs-rundir\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.040124 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03aad2b-8ca1-4310-8c11-3287fafcd66f-config\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.049412 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.051505 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.054079 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.061029 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.065606 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.066689 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khbhr\" (UniqueName: \"kubernetes.io/projected/c03aad2b-8ca1-4310-8c11-3287fafcd66f-kube-api-access-khbhr\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.073995 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03aad2b-8ca1-4310-8c11-3287fafcd66f-combined-ca-bundle\") pod \"ovn-controller-metrics-tbt28\" (UID: \"c03aad2b-8ca1-4310-8c11-3287fafcd66f\") " pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.140764 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.140846 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc 
kubenswrapper[4810]: I0219 15:27:28.140907 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28xm\" (UniqueName: \"kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.140960 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.147073 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tbt28" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.210169 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.236186 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.237631 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.242377 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.242460 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.242526 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28xm\" (UniqueName: \"kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.242579 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.243208 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc 
kubenswrapper[4810]: I0219 15:27:28.243555 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.243763 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.246364 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.262908 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.276278 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28xm\" (UniqueName: \"kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm\") pod \"dnsmasq-dns-58bdb65675-l5l8x\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.343830 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.343889 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hcqld\" (UniqueName: \"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.343924 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.344109 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.344240 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.446653 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.446736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.446789 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.446820 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.446850 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqld\" (UniqueName: \"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.448804 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.450644 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.453125 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.453868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.454367 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.464621 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqld\" (UniqueName: \"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld\") pod \"dnsmasq-dns-6f8ff78869-24szl\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:28 crc kubenswrapper[4810]: I0219 15:27:28.560944 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:29 crc kubenswrapper[4810]: I0219 15:27:29.704016 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.141645 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.148139 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.263644 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tbt28"] Feb 19 15:27:31 crc kubenswrapper[4810]: W0219 15:27:31.264256 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5886c70b_ea09_4a9e_9c31_1689d32735a5.slice/crio-bb8c011e783721ee6fc9b0c30d19bca5636e6fa4d84780893c5d094407e82425 WatchSource:0}: Error finding container bb8c011e783721ee6fc9b0c30d19bca5636e6fa4d84780893c5d094407e82425: Status 404 returned error can't find the container with id bb8c011e783721ee6fc9b0c30d19bca5636e6fa4d84780893c5d094407e82425 Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.510604 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" event={"ID":"64a59633-cb6f-4631-a980-566894f0ce35","Type":"ContainerStarted","Data":"bcd810514d656b151586b085915c58b159ffd3f83f716b74f5c945866a1aa802"} Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.510910 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="dnsmasq-dns" containerID="cri-o://bcd810514d656b151586b085915c58b159ffd3f83f716b74f5c945866a1aa802" gracePeriod=10 Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 
15:27:31.511174 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.520760 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" event={"ID":"5886c70b-ea09-4a9e-9c31-1689d32735a5","Type":"ContainerStarted","Data":"bb8c011e783721ee6fc9b0c30d19bca5636e6fa4d84780893c5d094407e82425"} Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.525648 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" event={"ID":"cef0e580-4d18-480e-a57d-9c1b31405cd8","Type":"ContainerStarted","Data":"91a11d1dc7d57506c1c71e77acb0aff16fbbb92753ce7889b5cb70d83d2c4e29"} Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.529578 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tbt28" event={"ID":"c03aad2b-8ca1-4310-8c11-3287fafcd66f","Type":"ContainerStarted","Data":"398ce092968fcf75e595ec5e07bca0c0bfb7f0b37aac1aed5f14c676bdbdecfb"} Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.650351 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" podStartSLOduration=26.66434251 podStartE2EDuration="27.650317654s" podCreationTimestamp="2026-02-19 15:27:04 +0000 UTC" firstStartedPulling="2026-02-19 15:27:20.802297564 +0000 UTC m=+1070.284327718" lastFinishedPulling="2026-02-19 15:27:21.788272738 +0000 UTC m=+1071.270302862" observedRunningTime="2026-02-19 15:27:31.647852104 +0000 UTC m=+1081.129882238" watchObservedRunningTime="2026-02-19 15:27:31.650317654 +0000 UTC m=+1081.132347768" Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.929986 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.966380 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"] Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.968119 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:31 crc kubenswrapper[4810]: I0219 15:27:31.994576 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"] Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.013467 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d7lt\" (UniqueName: \"kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.013554 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.013604 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.013656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " 
pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.013719 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.116124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d7lt\" (UniqueName: \"kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.116200 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.116245 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.116292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 
15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.116362 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.117398 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.118136 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.118755 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.118852 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.189162 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9d7lt\" (UniqueName: \"kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt\") pod \"dnsmasq-dns-7d95ff5b97-flxcw\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") " pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.304979 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.558133 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bdffb5e6-13bb-4c08-ad3c-52d8ded85431","Type":"ContainerStarted","Data":"9c08e680a1b3ab0d4a9d3191480c060f2f4e5575377e753fd3d6cfb4cd55a63e"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.560155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0ffb8ce-a356-4416-b96c-49db30ff1947","Type":"ContainerStarted","Data":"bb49108e1faa12136cbb1db0e6f8a1e4d5a337b3554af6b9133dd5428aa7a353"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.572239 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"30d11a24-9722-4e7a-9be5-f2bd00128167","Type":"ContainerStarted","Data":"1591fb62b46d0cb1a40e18641c32aabc2ceca62bd58d6c739d0fd39dbad2c5c9"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.577414 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="dnsmasq-dns" containerID="cri-o://7dff482418537b07f37dc2275d46ba72695f3b6461cc224e2cd000dd183e1d25" gracePeriod=10 Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.577631 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" 
event={"ID":"eec01803-2cfd-4e97-a1c0-216c2622e913","Type":"ContainerStarted","Data":"7dff482418537b07f37dc2275d46ba72695f3b6461cc224e2cd000dd183e1d25"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.577667 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.594922 4810 generic.go:334] "Generic (PLEG): container finished" podID="64a59633-cb6f-4631-a980-566894f0ce35" containerID="bcd810514d656b151586b085915c58b159ffd3f83f716b74f5c945866a1aa802" exitCode=0 Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.594982 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" event={"ID":"64a59633-cb6f-4631-a980-566894f0ce35","Type":"ContainerDied","Data":"bcd810514d656b151586b085915c58b159ffd3f83f716b74f5c945866a1aa802"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.595005 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" event={"ID":"64a59633-cb6f-4631-a980-566894f0ce35","Type":"ContainerDied","Data":"97bfa68e77f4ab8ae2a59549a783cdf08404be22506f85bef749cb5bf6fbe5cf"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.595014 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bfa68e77f4ab8ae2a59549a783cdf08404be22506f85bef749cb5bf6fbe5cf" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.614666 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b985124-01b7-430c-b5ea-b9fd095e5f5e","Type":"ContainerStarted","Data":"0f36e0c185a145dde7c4e8351f98756404400d361d9734a38f2e7fead03e8f24"} Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.643197 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" podStartSLOduration=28.476877247 podStartE2EDuration="28.643181897s" 
podCreationTimestamp="2026-02-19 15:27:04 +0000 UTC" firstStartedPulling="2026-02-19 15:27:21.659834665 +0000 UTC m=+1071.141864789" lastFinishedPulling="2026-02-19 15:27:21.826139325 +0000 UTC m=+1071.308169439" observedRunningTime="2026-02-19 15:27:32.604113241 +0000 UTC m=+1082.086143375" watchObservedRunningTime="2026-02-19 15:27:32.643181897 +0000 UTC m=+1082.125212021" Feb 19 15:27:32 crc kubenswrapper[4810]: I0219 15:27:32.925734 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"] Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.064424 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.082621 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.088372 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.088747 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.090778 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.101487 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5lxcx" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.111723 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.242881 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-cache\") pod \"swift-storage-0\" (UID: 
\"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.242943 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2af25-5b30-4fb9-801e-f4a84d665540-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.243130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.243307 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjdb\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-kube-api-access-7kjdb\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.243376 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.243425 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-lock\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 
15:27:33.345250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kjdb\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-kube-api-access-7kjdb\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.345305 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.345345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-lock\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.345365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-cache\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.345397 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2af25-5b30-4fb9-801e-f4a84d665540-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.345441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" 
(UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.345576 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.345590 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.345631 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:27:33.845615861 +0000 UTC m=+1083.327645985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.346135 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.347149 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-cache\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.347471 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/37e2af25-5b30-4fb9-801e-f4a84d665540-lock\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.362525 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kjdb\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-kube-api-access-7kjdb\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.369690 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.390432 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e2af25-5b30-4fb9-801e-f4a84d665540-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.625289 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5t6ds" event={"ID":"542da555-4f39-4dff-b378-5306135244db","Type":"ContainerStarted","Data":"937bbc860401565ad8af8a788c4abb62132c4199e86ece180aec57a1dc21c966"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.627443 4810 generic.go:334] "Generic (PLEG): container finished" podID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerID="c07f2dcf7043fcd524ebc329815b7a17ca72dbefb72ca802efe8c1da736fd8ad" exitCode=0 Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.627496 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" 
event={"ID":"5886c70b-ea09-4a9e-9c31-1689d32735a5","Type":"ContainerDied","Data":"c07f2dcf7043fcd524ebc329815b7a17ca72dbefb72ca802efe8c1da736fd8ad"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.631675 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488" event={"ID":"4a4fa57b-aa00-4866-b31e-df29f7f86480","Type":"ContainerStarted","Data":"5763f40a55ec61c17824f9bf1cef4535f925e85cabd705fc9224e22229d2a90f"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.631827 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s5488" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.633732 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerStarted","Data":"5f65c0deba7b3077c5501137f00e319288d66ec1245a0e431539e6d1d5d3d67c"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.637693 4810 generic.go:334] "Generic (PLEG): container finished" podID="cef0e580-4d18-480e-a57d-9c1b31405cd8" containerID="e98ee022771e694a97d33efc6f076045128cb6c9031e6599d9309671e11fc1de" exitCode=0 Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.637920 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" event={"ID":"cef0e580-4d18-480e-a57d-9c1b31405cd8","Type":"ContainerDied","Data":"e98ee022771e694a97d33efc6f076045128cb6c9031e6599d9309671e11fc1de"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.641064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerStarted","Data":"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.643936 4810 generic.go:334] "Generic (PLEG): container finished" podID="eec01803-2cfd-4e97-a1c0-216c2622e913" 
containerID="7dff482418537b07f37dc2275d46ba72695f3b6461cc224e2cd000dd183e1d25" exitCode=0 Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.643998 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" event={"ID":"eec01803-2cfd-4e97-a1c0-216c2622e913","Type":"ContainerDied","Data":"7dff482418537b07f37dc2275d46ba72695f3b6461cc224e2cd000dd183e1d25"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.645870 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c","Type":"ContainerStarted","Data":"781b07acf23d18cc10631b3a01aa0eb27d1e62e7e3cfc8db109ba3d58b915ff0"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.652185 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"097bc4d1-5648-4607-9c49-286e4bbbe553","Type":"ContainerStarted","Data":"35654a2e76fc1f65be05a171e2aeec58c5e73e3b78c5850da9115db247aae94f"} Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.652511 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.754705 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s5488" podStartSLOduration=11.269515073 podStartE2EDuration="19.754686715s" podCreationTimestamp="2026-02-19 15:27:14 +0000 UTC" firstStartedPulling="2026-02-19 15:27:22.086856806 +0000 UTC m=+1071.568886930" lastFinishedPulling="2026-02-19 15:27:30.572028448 +0000 UTC m=+1080.054058572" observedRunningTime="2026-02-19 15:27:33.744435344 +0000 UTC m=+1083.226465468" watchObservedRunningTime="2026-02-19 15:27:33.754686715 +0000 UTC m=+1083.236716849" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.828718 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=12.286684607 podStartE2EDuration="22.828698907s" podCreationTimestamp="2026-02-19 15:27:11 +0000 UTC" firstStartedPulling="2026-02-19 15:27:20.81520985 +0000 UTC m=+1070.297239974" lastFinishedPulling="2026-02-19 15:27:31.35722415 +0000 UTC m=+1080.839254274" observedRunningTime="2026-02-19 15:27:33.827032006 +0000 UTC m=+1083.309062130" watchObservedRunningTime="2026-02-19 15:27:33.828698907 +0000 UTC m=+1083.310729031" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.855526 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.855738 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.855765 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: E0219 15:27:33.855817 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:27:34.85579849 +0000 UTC m=+1084.337828614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.869086 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.956758 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc\") pod \"64a59633-cb6f-4631-a980-566894f0ce35\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.956843 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config\") pod \"64a59633-cb6f-4631-a980-566894f0ce35\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.956953 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dbn6\" (UniqueName: \"kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6\") pod \"64a59633-cb6f-4631-a980-566894f0ce35\" (UID: \"64a59633-cb6f-4631-a980-566894f0ce35\") " Feb 19 15:27:33 crc kubenswrapper[4810]: I0219 15:27:33.976905 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6" (OuterVolumeSpecName: "kube-api-access-5dbn6") pod "64a59633-cb6f-4631-a980-566894f0ce35" (UID: "64a59633-cb6f-4631-a980-566894f0ce35"). InnerVolumeSpecName "kube-api-access-5dbn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.023655 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config" (OuterVolumeSpecName: "config") pod "64a59633-cb6f-4631-a980-566894f0ce35" (UID: "64a59633-cb6f-4631-a980-566894f0ce35"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.029647 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64a59633-cb6f-4631-a980-566894f0ce35" (UID: "64a59633-cb6f-4631-a980-566894f0ce35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.059364 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dbn6\" (UniqueName: \"kubernetes.io/projected/64a59633-cb6f-4631-a980-566894f0ce35-kube-api-access-5dbn6\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.059398 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.059412 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a59633-cb6f-4631-a980-566894f0ce35-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.660682 4810 generic.go:334] "Generic (PLEG): container finished" podID="542da555-4f39-4dff-b378-5306135244db" containerID="937bbc860401565ad8af8a788c4abb62132c4199e86ece180aec57a1dc21c966" exitCode=0 Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.660798 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5t6ds" event={"ID":"542da555-4f39-4dff-b378-5306135244db","Type":"ContainerDied","Data":"937bbc860401565ad8af8a788c4abb62132c4199e86ece180aec57a1dc21c966"} Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.660945 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-mddqt" Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.700767 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.706065 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-mddqt"] Feb 19 15:27:34 crc kubenswrapper[4810]: I0219 15:27:34.874677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:34 crc kubenswrapper[4810]: E0219 15:27:34.874946 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:34 crc kubenswrapper[4810]: E0219 15:27:34.874989 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:34 crc kubenswrapper[4810]: E0219 15:27:34.875067 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:27:36.875043079 +0000 UTC m=+1086.357073213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:35 crc kubenswrapper[4810]: I0219 15:27:35.476490 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a59633-cb6f-4631-a980-566894f0ce35" path="/var/lib/kubelet/pods/64a59633-cb6f-4631-a980-566894f0ce35/volumes" Feb 19 15:27:35 crc kubenswrapper[4810]: I0219 15:27:35.672913 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerStarted","Data":"024e6cbe9539fc73096ba26de873a5cd3591fc2cd12d0be8dd23110aaa2f3ec6"} Feb 19 15:27:35 crc kubenswrapper[4810]: W0219 15:27:35.743948 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod788aae13_b274_4965_ac0c_8ac075c32567.slice/crio-7cd4fcbb128b34896f354e231a1de0a145b80016e5c0592bfefbc4361bfadbaf WatchSource:0}: Error finding container 7cd4fcbb128b34896f354e231a1de0a145b80016e5c0592bfefbc4361bfadbaf: Status 404 returned error can't find the container with id 7cd4fcbb128b34896f354e231a1de0a145b80016e5c0592bfefbc4361bfadbaf Feb 19 15:27:35 crc kubenswrapper[4810]: I0219 15:27:35.975660 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.111481 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb\") pod \"cef0e580-4d18-480e-a57d-9c1b31405cd8\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.111768 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w28xm\" (UniqueName: \"kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm\") pod \"cef0e580-4d18-480e-a57d-9c1b31405cd8\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.111793 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc\") pod \"cef0e580-4d18-480e-a57d-9c1b31405cd8\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.111821 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config\") pod \"cef0e580-4d18-480e-a57d-9c1b31405cd8\" (UID: \"cef0e580-4d18-480e-a57d-9c1b31405cd8\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.136185 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm" (OuterVolumeSpecName: "kube-api-access-w28xm") pod "cef0e580-4d18-480e-a57d-9c1b31405cd8" (UID: "cef0e580-4d18-480e-a57d-9c1b31405cd8"). InnerVolumeSpecName "kube-api-access-w28xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.167076 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config" (OuterVolumeSpecName: "config") pod "cef0e580-4d18-480e-a57d-9c1b31405cd8" (UID: "cef0e580-4d18-480e-a57d-9c1b31405cd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.168265 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.213702 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w28xm\" (UniqueName: \"kubernetes.io/projected/cef0e580-4d18-480e-a57d-9c1b31405cd8-kube-api-access-w28xm\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.213735 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.237113 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cef0e580-4d18-480e-a57d-9c1b31405cd8" (UID: "cef0e580-4d18-480e-a57d-9c1b31405cd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.238247 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cef0e580-4d18-480e-a57d-9c1b31405cd8" (UID: "cef0e580-4d18-480e-a57d-9c1b31405cd8"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.314612 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx72g\" (UniqueName: \"kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g\") pod \"eec01803-2cfd-4e97-a1c0-216c2622e913\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.314740 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc\") pod \"eec01803-2cfd-4e97-a1c0-216c2622e913\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.314815 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config\") pod \"eec01803-2cfd-4e97-a1c0-216c2622e913\" (UID: \"eec01803-2cfd-4e97-a1c0-216c2622e913\") " Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.315265 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.315283 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cef0e580-4d18-480e-a57d-9c1b31405cd8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.317989 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g" (OuterVolumeSpecName: "kube-api-access-xx72g") pod "eec01803-2cfd-4e97-a1c0-216c2622e913" (UID: 
"eec01803-2cfd-4e97-a1c0-216c2622e913"). InnerVolumeSpecName "kube-api-access-xx72g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.350984 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config" (OuterVolumeSpecName: "config") pod "eec01803-2cfd-4e97-a1c0-216c2622e913" (UID: "eec01803-2cfd-4e97-a1c0-216c2622e913"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.355873 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eec01803-2cfd-4e97-a1c0-216c2622e913" (UID: "eec01803-2cfd-4e97-a1c0-216c2622e913"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.416478 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx72g\" (UniqueName: \"kubernetes.io/projected/eec01803-2cfd-4e97-a1c0-216c2622e913-kube-api-access-xx72g\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.416510 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.416519 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec01803-2cfd-4e97-a1c0-216c2622e913-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.681673 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.681672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-l5l8x" event={"ID":"cef0e580-4d18-480e-a57d-9c1b31405cd8","Type":"ContainerDied","Data":"91a11d1dc7d57506c1c71e77acb0aff16fbbb92753ce7889b5cb70d83d2c4e29"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.682176 4810 scope.go:117] "RemoveContainer" containerID="e98ee022771e694a97d33efc6f076045128cb6c9031e6599d9309671e11fc1de" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.683798 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" event={"ID":"5886c70b-ea09-4a9e-9c31-1689d32735a5","Type":"ContainerStarted","Data":"19026669f4751c718dc10aae0046e726cc204eeb831deb5f5b82ecfb1a0aaee7"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.684455 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.687477 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tbt28" event={"ID":"c03aad2b-8ca1-4310-8c11-3287fafcd66f","Type":"ContainerStarted","Data":"b33e056e143dced9b19d9021affc6bb61a726b19dfe75b0845c5af53ec92139d"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.688941 4810 generic.go:334] "Generic (PLEG): container finished" podID="788aae13-b274-4965-ac0c-8ac075c32567" containerID="28ca2a964c46491b21497e9a884496f03f8a4889da795bda4b42a303c67c82ef" exitCode=0 Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.689361 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" event={"ID":"788aae13-b274-4965-ac0c-8ac075c32567","Type":"ContainerDied","Data":"28ca2a964c46491b21497e9a884496f03f8a4889da795bda4b42a303c67c82ef"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.689395 
4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" event={"ID":"788aae13-b274-4965-ac0c-8ac075c32567","Type":"ContainerStarted","Data":"7cd4fcbb128b34896f354e231a1de0a145b80016e5c0592bfefbc4361bfadbaf"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.692233 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5t6ds" event={"ID":"542da555-4f39-4dff-b378-5306135244db","Type":"ContainerStarted","Data":"355c2244899b75d0a7bc914f26eb40d584199f86ae7a4abd29d74085bc3b9c7e"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.692276 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5t6ds" event={"ID":"542da555-4f39-4dff-b378-5306135244db","Type":"ContainerStarted","Data":"72c55a37bf97149620687bdab13c5b4f11850a60206da90ebbca63c3bf960c47"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.692490 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.692521 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.694628 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.694761 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c99647bb5-xkrgb" event={"ID":"eec01803-2cfd-4e97-a1c0-216c2622e913","Type":"ContainerDied","Data":"d66ef65309b98fdfd5a68e7c80449beb1e0f777f1d9cb1877aa1054da73700ab"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.697027 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b985124-01b7-430c-b5ea-b9fd095e5f5e","Type":"ContainerStarted","Data":"f6b673831621afe1ea35404595fcddba7c186e5deeb3bc51c84aea830e676504"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.704433 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bdffb5e6-13bb-4c08-ad3c-52d8ded85431","Type":"ContainerStarted","Data":"81218b34ce9b45a147e3a1e08b2d64be46ede41bc685571b49c6521a842f04cb"} Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.706890 4810 scope.go:117] "RemoveContainer" containerID="7dff482418537b07f37dc2275d46ba72695f3b6461cc224e2cd000dd183e1d25" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.709361 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" podStartSLOduration=8.709345489 podStartE2EDuration="8.709345489s" podCreationTimestamp="2026-02-19 15:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:27:36.706271004 +0000 UTC m=+1086.188301148" watchObservedRunningTime="2026-02-19 15:27:36.709345489 +0000 UTC m=+1086.191375613" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.760510 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5t6ds" podStartSLOduration=15.003856223 podStartE2EDuration="22.760490301s" 
podCreationTimestamp="2026-02-19 15:27:14 +0000 UTC" firstStartedPulling="2026-02-19 15:27:22.787892817 +0000 UTC m=+1072.269922941" lastFinishedPulling="2026-02-19 15:27:30.544526885 +0000 UTC m=+1080.026557019" observedRunningTime="2026-02-19 15:27:36.757392005 +0000 UTC m=+1086.239422139" watchObservedRunningTime="2026-02-19 15:27:36.760490301 +0000 UTC m=+1086.242520425" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.781178 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.750807656 podStartE2EDuration="22.781160587s" podCreationTimestamp="2026-02-19 15:27:14 +0000 UTC" firstStartedPulling="2026-02-19 15:27:22.879833467 +0000 UTC m=+1072.361863591" lastFinishedPulling="2026-02-19 15:27:35.910186388 +0000 UTC m=+1085.392216522" observedRunningTime="2026-02-19 15:27:36.778350218 +0000 UTC m=+1086.260380342" watchObservedRunningTime="2026-02-19 15:27:36.781160587 +0000 UTC m=+1086.263190711" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.796510 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.797239 4810 scope.go:117] "RemoveContainer" containerID="ed90cdf99f920236e6f409e68476a3cd86bdc892ec839211cda663eab550964c" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.828888 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.784395041 podStartE2EDuration="19.828866775s" podCreationTimestamp="2026-02-19 15:27:17 +0000 UTC" firstStartedPulling="2026-02-19 15:27:21.873650908 +0000 UTC m=+1071.355681032" lastFinishedPulling="2026-02-19 15:27:35.918122622 +0000 UTC m=+1085.400152766" observedRunningTime="2026-02-19 15:27:36.801617178 +0000 UTC m=+1086.283647302" watchObservedRunningTime="2026-02-19 15:27:36.828866775 +0000 UTC m=+1086.310896899" Feb 19 15:27:36 crc kubenswrapper[4810]: 
I0219 15:27:36.848399 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.851049 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-l5l8x"] Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.872474 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.874576 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tbt28" podStartSLOduration=5.3011395629999996 podStartE2EDuration="9.874565453s" podCreationTimestamp="2026-02-19 15:27:27 +0000 UTC" firstStartedPulling="2026-02-19 15:27:31.300186423 +0000 UTC m=+1080.782216547" lastFinishedPulling="2026-02-19 15:27:35.873612323 +0000 UTC m=+1085.355642437" observedRunningTime="2026-02-19 15:27:36.874499192 +0000 UTC m=+1086.356529316" watchObservedRunningTime="2026-02-19 15:27:36.874565453 +0000 UTC m=+1086.356595567" Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.926136 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:36 crc kubenswrapper[4810]: E0219 15:27:36.926540 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:36 crc kubenswrapper[4810]: E0219 15:27:36.926553 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:36 crc kubenswrapper[4810]: E0219 15:27:36.926595 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:27:40.926581157 +0000 UTC m=+1090.408611281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.936093 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:36 crc kubenswrapper[4810]: I0219 15:27:36.947291 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c99647bb5-xkrgb"] Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.028595 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wlh5p"] Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.028968 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.028984 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.029006 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029012 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.029022 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef0e580-4d18-480e-a57d-9c1b31405cd8" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: 
I0219 15:27:37.029029 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef0e580-4d18-480e-a57d-9c1b31405cd8" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.029038 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029044 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.029071 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029076 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029245 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029258 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef0e580-4d18-480e-a57d-9c1b31405cd8" containerName="init" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029268 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a59633-cb6f-4631-a980-566894f0ce35" containerName="dnsmasq-dns" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.029975 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.032611 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.033355 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.034255 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.062775 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wlh5p"] Feb 19 15:27:37 crc kubenswrapper[4810]: E0219 15:27:37.063306 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-cnh57 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-wlh5p" podUID="e49c1739-432b-445b-87d5-904af08961e4" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.072213 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hrdll"] Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.073536 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.084179 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hrdll"] Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.098599 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wlh5p"] Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129402 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129441 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129460 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129484 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh57\" (UniqueName: \"kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" 
Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129546 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.129792 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231387 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231437 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc 
kubenswrapper[4810]: I0219 15:27:37.231457 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231484 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231506 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh57\" (UniqueName: \"kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231562 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231581 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231665 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231729 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231754 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231837 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231919 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.231992 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqq2\" (UniqueName: \"kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.232023 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.232279 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.232700 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.232912 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.237505 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.240151 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.250578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh57\" (UniqueName: \"kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.251144 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf\") pod \"swift-ring-rebalance-wlh5p\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333422 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf\") pod 
\"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333569 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333596 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fqq2\" (UniqueName: \"kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333620 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333664 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.333691 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf\") pod \"swift-ring-rebalance-hrdll\" (UID: 
\"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.334511 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.334722 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.334764 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.336754 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.337521 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 
15:27:37.337819 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.353847 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fqq2\" (UniqueName: \"kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2\") pod \"swift-ring-rebalance-hrdll\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.403734 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.451269 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef0e580-4d18-480e-a57d-9c1b31405cd8" path="/var/lib/kubelet/pods/cef0e580-4d18-480e-a57d-9c1b31405cd8/volumes" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.452023 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec01803-2cfd-4e97-a1c0-216c2622e913" path="/var/lib/kubelet/pods/eec01803-2cfd-4e97-a1c0-216c2622e913/volumes" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.713805 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" event={"ID":"788aae13-b274-4965-ac0c-8ac075c32567","Type":"ContainerStarted","Data":"e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384"} Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.714208 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.716631 4810 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.718088 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.730202 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.759395 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" podStartSLOduration=6.759377712 podStartE2EDuration="6.759377712s" podCreationTimestamp="2026-02-19 15:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:27:37.738088501 +0000 UTC m=+1087.220118645" watchObservedRunningTime="2026-02-19 15:27:37.759377712 +0000 UTC m=+1087.241407836" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.762476 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842393 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842547 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842605 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842639 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842670 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842691 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.842745 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnh57\" (UniqueName: \"kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57\") pod \"e49c1739-432b-445b-87d5-904af08961e4\" (UID: \"e49c1739-432b-445b-87d5-904af08961e4\") " Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.843780 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts" (OuterVolumeSpecName: "scripts") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.845605 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.845666 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.849460 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.852099 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.852417 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.856393 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57" (OuterVolumeSpecName: "kube-api-access-cnh57") pod "e49c1739-432b-445b-87d5-904af08961e4" (UID: "e49c1739-432b-445b-87d5-904af08961e4"). InnerVolumeSpecName "kube-api-access-cnh57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.896369 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hrdll"] Feb 19 15:27:37 crc kubenswrapper[4810]: W0219 15:27:37.907781 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c36f3e5_f790_4eda_9486_174f8624dad1.slice/crio-d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b WatchSource:0}: Error finding container d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b: Status 404 returned error can't find the container with id d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945130 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945168 4810 
reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945181 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e49c1739-432b-445b-87d5-904af08961e4-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945191 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e49c1739-432b-445b-87d5-904af08961e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945202 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945213 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnh57\" (UniqueName: \"kubernetes.io/projected/e49c1739-432b-445b-87d5-904af08961e4-kube-api-access-cnh57\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:37 crc kubenswrapper[4810]: I0219 15:27:37.945224 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e49c1739-432b-445b-87d5-904af08961e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:38 crc kubenswrapper[4810]: I0219 15:27:38.752246 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wlh5p" Feb 19 15:27:38 crc kubenswrapper[4810]: I0219 15:27:38.752344 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hrdll" event={"ID":"6c36f3e5-f790-4eda-9486-174f8624dad1","Type":"ContainerStarted","Data":"d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b"} Feb 19 15:27:38 crc kubenswrapper[4810]: I0219 15:27:38.822691 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wlh5p"] Feb 19 15:27:38 crc kubenswrapper[4810]: I0219 15:27:38.835031 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-wlh5p"] Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.448726 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49c1739-432b-445b-87d5-904af08961e4" path="/var/lib/kubelet/pods/e49c1739-432b-445b-87d5-904af08961e4/volumes" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.653802 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.744798 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.764952 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.805495 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.983764 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.985210 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.989189 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.989523 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-f89wk" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.989590 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 15:27:39 crc kubenswrapper[4810]: I0219 15:27:39.989766 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.009670 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083277 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083392 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-scripts\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083421 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8675\" (UniqueName: \"kubernetes.io/projected/22facf67-088b-410b-986a-c9e09b3d8feb-kube-api-access-p8675\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " 
pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083585 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083628 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083655 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-config\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.083690 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185462 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185561 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185619 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-scripts\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185645 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8675\" (UniqueName: \"kubernetes.io/projected/22facf67-088b-410b-986a-c9e09b3d8feb-kube-api-access-p8675\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185755 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.185778 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-config\") pod \"ovn-northd-0\" (UID: 
\"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.186841 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-scripts\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.187164 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22facf67-088b-410b-986a-c9e09b3d8feb-config\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.193421 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.193790 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.194412 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.195258 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/22facf67-088b-410b-986a-c9e09b3d8feb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.219942 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8675\" (UniqueName: \"kubernetes.io/projected/22facf67-088b-410b-986a-c9e09b3d8feb-kube-api-access-p8675\") pod \"ovn-northd-0\" (UID: \"22facf67-088b-410b-986a-c9e09b3d8feb\") " pod="openstack/ovn-northd-0" Feb 19 15:27:40 crc kubenswrapper[4810]: I0219 15:27:40.328925 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 15:27:41 crc kubenswrapper[4810]: I0219 15:27:40.999828 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:41 crc kubenswrapper[4810]: E0219 15:27:41.000156 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:41 crc kubenswrapper[4810]: E0219 15:27:41.000205 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:41 crc kubenswrapper[4810]: E0219 15:27:41.000301 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:27:49.000272383 +0000 UTC m=+1098.482302507 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:41 crc kubenswrapper[4810]: I0219 15:27:41.781287 4810 generic.go:334] "Generic (PLEG): container finished" podID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerID="024e6cbe9539fc73096ba26de873a5cd3591fc2cd12d0be8dd23110aaa2f3ec6" exitCode=0 Feb 19 15:27:41 crc kubenswrapper[4810]: I0219 15:27:41.781363 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerDied","Data":"024e6cbe9539fc73096ba26de873a5cd3591fc2cd12d0be8dd23110aaa2f3ec6"} Feb 19 15:27:41 crc kubenswrapper[4810]: I0219 15:27:41.963888 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.307736 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.369129 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.369486 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="dnsmasq-dns" containerID="cri-o://19026669f4751c718dc10aae0046e726cc204eeb831deb5f5b82ecfb1a0aaee7" gracePeriod=10 Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.370472 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.810540 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerID="19026669f4751c718dc10aae0046e726cc204eeb831deb5f5b82ecfb1a0aaee7" exitCode=0 Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.810632 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" event={"ID":"5886c70b-ea09-4a9e-9c31-1689d32735a5","Type":"ContainerDied","Data":"19026669f4751c718dc10aae0046e726cc204eeb831deb5f5b82ecfb1a0aaee7"} Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.812733 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0ffb8ce-a356-4416-b96c-49db30ff1947" containerID="bb49108e1faa12136cbb1db0e6f8a1e4d5a337b3554af6b9133dd5428aa7a353" exitCode=0 Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.812789 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0ffb8ce-a356-4416-b96c-49db30ff1947","Type":"ContainerDied","Data":"bb49108e1faa12136cbb1db0e6f8a1e4d5a337b3554af6b9133dd5428aa7a353"} Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.834163 4810 generic.go:334] "Generic (PLEG): container finished" podID="30d11a24-9722-4e7a-9be5-f2bd00128167" containerID="1591fb62b46d0cb1a40e18641c32aabc2ceca62bd58d6c739d0fd39dbad2c5c9" exitCode=0 Feb 19 15:27:42 crc kubenswrapper[4810]: I0219 15:27:42.834748 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"30d11a24-9722-4e7a-9be5-f2bd00128167","Type":"ContainerDied","Data":"1591fb62b46d0cb1a40e18641c32aabc2ceca62bd58d6c739d0fd39dbad2c5c9"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.026041 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.136639 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc\") pod \"5886c70b-ea09-4a9e-9c31-1689d32735a5\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.136980 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb\") pod \"5886c70b-ea09-4a9e-9c31-1689d32735a5\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.137036 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcqld\" (UniqueName: \"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld\") pod \"5886c70b-ea09-4a9e-9c31-1689d32735a5\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.137060 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config\") pod \"5886c70b-ea09-4a9e-9c31-1689d32735a5\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.137126 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb\") pod \"5886c70b-ea09-4a9e-9c31-1689d32735a5\" (UID: \"5886c70b-ea09-4a9e-9c31-1689d32735a5\") " Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.141676 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld" (OuterVolumeSpecName: "kube-api-access-hcqld") pod "5886c70b-ea09-4a9e-9c31-1689d32735a5" (UID: "5886c70b-ea09-4a9e-9c31-1689d32735a5"). InnerVolumeSpecName "kube-api-access-hcqld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.170081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5886c70b-ea09-4a9e-9c31-1689d32735a5" (UID: "5886c70b-ea09-4a9e-9c31-1689d32735a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.174991 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config" (OuterVolumeSpecName: "config") pod "5886c70b-ea09-4a9e-9c31-1689d32735a5" (UID: "5886c70b-ea09-4a9e-9c31-1689d32735a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.177462 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5886c70b-ea09-4a9e-9c31-1689d32735a5" (UID: "5886c70b-ea09-4a9e-9c31-1689d32735a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.181768 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5886c70b-ea09-4a9e-9c31-1689d32735a5" (UID: "5886c70b-ea09-4a9e-9c31-1689d32735a5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.238936 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.238976 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcqld\" (UniqueName: \"kubernetes.io/projected/5886c70b-ea09-4a9e-9c31-1689d32735a5-kube-api-access-hcqld\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.238991 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.239003 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.239016 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5886c70b-ea09-4a9e-9c31-1689d32735a5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.239921 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.848263 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0ffb8ce-a356-4416-b96c-49db30ff1947","Type":"ContainerStarted","Data":"fddcd1d4e34f6a024682b173f600a45601e51b7ec78c7cf77a5412fe627b7032"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.850897 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"30d11a24-9722-4e7a-9be5-f2bd00128167","Type":"ContainerStarted","Data":"a246b7cef810a6f3a4206ac755dc815cc11f710c1574bd45357a8cc029f8ed2d"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.853987 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hrdll" event={"ID":"6c36f3e5-f790-4eda-9486-174f8624dad1","Type":"ContainerStarted","Data":"6446836f423aff91810aae3fb429a61dc4d7160b1082cc50f6cb50a21f6642d5"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.856683 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" event={"ID":"5886c70b-ea09-4a9e-9c31-1689d32735a5","Type":"ContainerDied","Data":"bb8c011e783721ee6fc9b0c30d19bca5636e6fa4d84780893c5d094407e82425"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.856715 4810 scope.go:117] "RemoveContainer" containerID="19026669f4751c718dc10aae0046e726cc204eeb831deb5f5b82ecfb1a0aaee7" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.856731 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-24szl" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.859547 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22facf67-088b-410b-986a-c9e09b3d8feb","Type":"ContainerStarted","Data":"47d0d3cf1f575b1c29b0c72f9905fc06df7cfecb4af25e3bbbf84ebdd832cd17"} Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.897785 4810 scope.go:117] "RemoveContainer" containerID="c07f2dcf7043fcd524ebc329815b7a17ca72dbefb72ca802efe8c1da736fd8ad" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.899827 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.53512423 podStartE2EDuration="37.899804708s" podCreationTimestamp="2026-02-19 15:27:06 +0000 UTC" firstStartedPulling="2026-02-19 15:27:20.81438485 +0000 UTC m=+1070.296414974" lastFinishedPulling="2026-02-19 15:27:30.179065328 +0000 UTC m=+1079.661095452" observedRunningTime="2026-02-19 15:27:43.876854356 +0000 UTC m=+1093.358884480" watchObservedRunningTime="2026-02-19 15:27:43.899804708 +0000 UTC m=+1093.381834832" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.909220 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.918032 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-24szl"] Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.927512 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hrdll" podStartSLOduration=2.087536813 podStartE2EDuration="6.927496956s" podCreationTimestamp="2026-02-19 15:27:37 +0000 UTC" firstStartedPulling="2026-02-19 15:27:37.910240005 +0000 UTC m=+1087.392270129" lastFinishedPulling="2026-02-19 15:27:42.750200148 +0000 UTC m=+1092.232230272" observedRunningTime="2026-02-19 
15:27:43.913999855 +0000 UTC m=+1093.396029979" watchObservedRunningTime="2026-02-19 15:27:43.927496956 +0000 UTC m=+1093.409527080" Feb 19 15:27:43 crc kubenswrapper[4810]: I0219 15:27:43.946956 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.043421211 podStartE2EDuration="35.946936352s" podCreationTimestamp="2026-02-19 15:27:08 +0000 UTC" firstStartedPulling="2026-02-19 15:27:21.66861939 +0000 UTC m=+1071.150649514" lastFinishedPulling="2026-02-19 15:27:30.572134511 +0000 UTC m=+1080.054164655" observedRunningTime="2026-02-19 15:27:43.937965472 +0000 UTC m=+1093.419995596" watchObservedRunningTime="2026-02-19 15:27:43.946936352 +0000 UTC m=+1093.428966476" Feb 19 15:27:44 crc kubenswrapper[4810]: I0219 15:27:44.871436 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22facf67-088b-410b-986a-c9e09b3d8feb","Type":"ContainerStarted","Data":"24e82aa2aad86302ff6a11c62bab36d7447048d609cf0f7fd7a7e16cd1fe62d8"} Feb 19 15:27:44 crc kubenswrapper[4810]: I0219 15:27:44.871681 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22facf67-088b-410b-986a-c9e09b3d8feb","Type":"ContainerStarted","Data":"b7ff00e8773edef7e35c5a8773e5619fc0d76a58227679370dd65bafea439c9d"} Feb 19 15:27:44 crc kubenswrapper[4810]: I0219 15:27:44.871699 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 15:27:44 crc kubenswrapper[4810]: I0219 15:27:44.907264 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.390618022 podStartE2EDuration="5.907244248s" podCreationTimestamp="2026-02-19 15:27:39 +0000 UTC" firstStartedPulling="2026-02-19 15:27:43.240378346 +0000 UTC m=+1092.722408470" lastFinishedPulling="2026-02-19 15:27:43.757004572 +0000 UTC m=+1093.239034696" observedRunningTime="2026-02-19 
15:27:44.895288686 +0000 UTC m=+1094.377318810" watchObservedRunningTime="2026-02-19 15:27:44.907244248 +0000 UTC m=+1094.389274372" Feb 19 15:27:45 crc kubenswrapper[4810]: I0219 15:27:45.453054 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" path="/var/lib/kubelet/pods/5886c70b-ea09-4a9e-9c31-1689d32735a5/volumes" Feb 19 15:27:47 crc kubenswrapper[4810]: I0219 15:27:47.839719 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 15:27:47 crc kubenswrapper[4810]: I0219 15:27:47.839994 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 15:27:49 crc kubenswrapper[4810]: I0219 15:27:49.082192 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:27:49 crc kubenswrapper[4810]: E0219 15:27:49.082433 4810 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 15:27:49 crc kubenswrapper[4810]: E0219 15:27:49.082600 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 15:27:49 crc kubenswrapper[4810]: E0219 15:27:49.082674 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift podName:37e2af25-5b30-4fb9-801e-f4a84d665540 nodeName:}" failed. No retries permitted until 2026-02-19 15:28:05.082639474 +0000 UTC m=+1114.564669598 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift") pod "swift-storage-0" (UID: "37e2af25-5b30-4fb9-801e-f4a84d665540") : configmap "swift-ring-files" not found Feb 19 15:27:49 crc kubenswrapper[4810]: I0219 15:27:49.358160 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:49 crc kubenswrapper[4810]: I0219 15:27:49.358230 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:49 crc kubenswrapper[4810]: I0219 15:27:49.505473 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:50 crc kubenswrapper[4810]: I0219 15:27:50.041545 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 15:27:50 crc kubenswrapper[4810]: I0219 15:27:50.920282 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerStarted","Data":"e279b42557dcc4bac021262be680408c24c0a74b26422b2a204f61141627de2f"} Feb 19 15:27:51 crc kubenswrapper[4810]: I0219 15:27:51.949410 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 15:27:52 crc kubenswrapper[4810]: I0219 15:27:52.055910 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 15:27:52 crc kubenswrapper[4810]: I0219 15:27:52.935820 4810 generic.go:334] "Generic (PLEG): container finished" podID="6c36f3e5-f790-4eda-9486-174f8624dad1" containerID="6446836f423aff91810aae3fb429a61dc4d7160b1082cc50f6cb50a21f6642d5" exitCode=0 Feb 19 15:27:52 crc kubenswrapper[4810]: I0219 15:27:52.935906 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-hrdll" event={"ID":"6c36f3e5-f790-4eda-9486-174f8624dad1","Type":"ContainerDied","Data":"6446836f423aff91810aae3fb429a61dc4d7160b1082cc50f6cb50a21f6642d5"} Feb 19 15:27:52 crc kubenswrapper[4810]: I0219 15:27:52.938352 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerStarted","Data":"96840091b1cea0064312675928d8c3948ff53222c6323509a84e0412a49c9891"} Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.517943 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677302 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677363 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677407 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fqq2\" (UniqueName: \"kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677437 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677492 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677544 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.677597 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift\") pod \"6c36f3e5-f790-4eda-9486-174f8624dad1\" (UID: \"6c36f3e5-f790-4eda-9486-174f8624dad1\") " Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.678643 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.678973 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.685653 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2" (OuterVolumeSpecName: "kube-api-access-4fqq2") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "kube-api-access-4fqq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.688304 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.699139 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.700123 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.704942 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts" (OuterVolumeSpecName: "scripts") pod "6c36f3e5-f790-4eda-9486-174f8624dad1" (UID: "6c36f3e5-f790-4eda-9486-174f8624dad1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779813 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779877 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779890 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fqq2\" (UniqueName: \"kubernetes.io/projected/6c36f3e5-f790-4eda-9486-174f8624dad1-kube-api-access-4fqq2\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779901 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c36f3e5-f790-4eda-9486-174f8624dad1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc 
kubenswrapper[4810]: I0219 15:27:54.779909 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779932 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c36f3e5-f790-4eda-9486-174f8624dad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:54 crc kubenswrapper[4810]: I0219 15:27:54.779942 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6c36f3e5-f790-4eda-9486-174f8624dad1-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:55 crc kubenswrapper[4810]: I0219 15:27:55.220895 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hrdll" event={"ID":"6c36f3e5-f790-4eda-9486-174f8624dad1","Type":"ContainerDied","Data":"d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b"} Feb 19 15:27:55 crc kubenswrapper[4810]: I0219 15:27:55.220938 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58b7703650c05a6b82d2a1f00bf308198af4dbc385a1e16d43b232960df6c3b" Feb 19 15:27:55 crc kubenswrapper[4810]: I0219 15:27:55.220963 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hrdll" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.609680 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z6ghl"] Feb 19 15:27:56 crc kubenswrapper[4810]: E0219 15:27:56.610346 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="init" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.610365 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="init" Feb 19 15:27:56 crc kubenswrapper[4810]: E0219 15:27:56.610397 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="dnsmasq-dns" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.610405 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="dnsmasq-dns" Feb 19 15:27:56 crc kubenswrapper[4810]: E0219 15:27:56.610423 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c36f3e5-f790-4eda-9486-174f8624dad1" containerName="swift-ring-rebalance" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.610432 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c36f3e5-f790-4eda-9486-174f8624dad1" containerName="swift-ring-rebalance" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.610594 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5886c70b-ea09-4a9e-9c31-1689d32735a5" containerName="dnsmasq-dns" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.610605 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c36f3e5-f790-4eda-9486-174f8624dad1" containerName="swift-ring-rebalance" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.611138 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.618711 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.621481 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z6ghl"] Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.710224 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts\") pod \"root-account-create-update-z6ghl\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.710298 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7k59\" (UniqueName: \"kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59\") pod \"root-account-create-update-z6ghl\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.812461 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts\") pod \"root-account-create-update-z6ghl\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.812911 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7k59\" (UniqueName: \"kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59\") pod \"root-account-create-update-z6ghl\" (UID: 
\"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.814483 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts\") pod \"root-account-create-update-z6ghl\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.833095 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7k59\" (UniqueName: \"kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59\") pod \"root-account-create-update-z6ghl\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:56 crc kubenswrapper[4810]: I0219 15:27:56.932754 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:57 crc kubenswrapper[4810]: I0219 15:27:57.401841 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z6ghl"] Feb 19 15:27:58 crc kubenswrapper[4810]: I0219 15:27:58.245312 4810 generic.go:334] "Generic (PLEG): container finished" podID="0cf99137-c194-466c-b92b-fbda63f4b3d5" containerID="99c27801bb39f1082a20de11443ab5b4c03a227dc67e8dce6456d77eb0a7c2db" exitCode=0 Feb 19 15:27:58 crc kubenswrapper[4810]: I0219 15:27:58.245366 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6ghl" event={"ID":"0cf99137-c194-466c-b92b-fbda63f4b3d5","Type":"ContainerDied","Data":"99c27801bb39f1082a20de11443ab5b4c03a227dc67e8dce6456d77eb0a7c2db"} Feb 19 15:27:58 crc kubenswrapper[4810]: I0219 15:27:58.245743 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6ghl" event={"ID":"0cf99137-c194-466c-b92b-fbda63f4b3d5","Type":"ContainerStarted","Data":"1f2afc66c4ebe08456c6e61c56714babfb410fb2b236afdbdd2a7670799cf58c"} Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.260016 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerStarted","Data":"e5ac3906ff8a232fe91505ae472e019caa241a902c02da4ff118ba00b0e5d016"} Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.295183 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.648452195 podStartE2EDuration="48.295164457s" podCreationTimestamp="2026-02-19 15:27:11 +0000 UTC" firstStartedPulling="2026-02-19 15:27:21.677479877 +0000 UTC m=+1071.159510001" lastFinishedPulling="2026-02-19 15:27:58.324192139 +0000 UTC m=+1107.806222263" observedRunningTime="2026-02-19 15:27:59.28591405 +0000 UTC m=+1108.767944204" 
watchObservedRunningTime="2026-02-19 15:27:59.295164457 +0000 UTC m=+1108.777194581" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.654202 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z6ghl" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.769933 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7k59\" (UniqueName: \"kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59\") pod \"0cf99137-c194-466c-b92b-fbda63f4b3d5\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.770070 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts\") pod \"0cf99137-c194-466c-b92b-fbda63f4b3d5\" (UID: \"0cf99137-c194-466c-b92b-fbda63f4b3d5\") " Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.770879 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0cf99137-c194-466c-b92b-fbda63f4b3d5" (UID: "0cf99137-c194-466c-b92b-fbda63f4b3d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.778548 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59" (OuterVolumeSpecName: "kube-api-access-c7k59") pod "0cf99137-c194-466c-b92b-fbda63f4b3d5" (UID: "0cf99137-c194-466c-b92b-fbda63f4b3d5"). InnerVolumeSpecName "kube-api-access-c7k59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.828256 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pd6hg"] Feb 19 15:27:59 crc kubenswrapper[4810]: E0219 15:27:59.828719 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf99137-c194-466c-b92b-fbda63f4b3d5" containerName="mariadb-account-create-update" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.828740 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf99137-c194-466c-b92b-fbda63f4b3d5" containerName="mariadb-account-create-update" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.828920 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf99137-c194-466c-b92b-fbda63f4b3d5" containerName="mariadb-account-create-update" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.829565 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pd6hg" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.836216 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pd6hg"] Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.872464 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6g2w\" (UniqueName: \"kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.872568 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" 
Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.873006 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7k59\" (UniqueName: \"kubernetes.io/projected/0cf99137-c194-466c-b92b-fbda63f4b3d5-kube-api-access-c7k59\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.873049 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cf99137-c194-466c-b92b-fbda63f4b3d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.936112 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-79a4-account-create-update-mrm9x"] Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.937821 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.939631 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.943874 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-79a4-account-create-update-mrm9x"] Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.974887 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts\") pod \"glance-79a4-account-create-update-mrm9x\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.974924 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cpfn\" (UniqueName: \"kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn\") pod 
\"glance-79a4-account-create-update-mrm9x\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.974997 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6g2w\" (UniqueName: \"kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.975057 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.975763 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" Feb 19 15:27:59 crc kubenswrapper[4810]: I0219 15:27:59.994169 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6g2w\" (UniqueName: \"kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w\") pod \"glance-db-create-pd6hg\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " pod="openstack/glance-db-create-pd6hg" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.076722 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts\") pod \"glance-79a4-account-create-update-mrm9x\" (UID: 
\"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.076785 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cpfn\" (UniqueName: \"kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn\") pod \"glance-79a4-account-create-update-mrm9x\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.077691 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts\") pod \"glance-79a4-account-create-update-mrm9x\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.092425 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cpfn\" (UniqueName: \"kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn\") pod \"glance-79a4-account-create-update-mrm9x\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.155521 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pd6hg" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.263619 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.271837 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6ghl" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.271882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6ghl" event={"ID":"0cf99137-c194-466c-b92b-fbda63f4b3d5","Type":"ContainerDied","Data":"1f2afc66c4ebe08456c6e61c56714babfb410fb2b236afdbdd2a7670799cf58c"} Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.271910 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2afc66c4ebe08456c6e61c56714babfb410fb2b236afdbdd2a7670799cf58c" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.399697 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.592682 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pd6hg"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.728669 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-79a4-account-create-update-mrm9x"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.867114 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8fq2p"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.868276 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.877425 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fq2p"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.890363 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-002d-account-create-update-6kk29"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.891462 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.892611 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgx2j\" (UniqueName: \"kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j\") pod \"keystone-db-create-8fq2p\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.892683 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts\") pod \"keystone-db-create-8fq2p\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.893392 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.931147 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-002d-account-create-update-6kk29"] Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.993678 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjs86\" (UniqueName: \"kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.993772 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgx2j\" (UniqueName: \"kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j\") pod \"keystone-db-create-8fq2p\" (UID: 
\"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.993823 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts\") pod \"keystone-db-create-8fq2p\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.993846 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:00 crc kubenswrapper[4810]: I0219 15:28:00.994894 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts\") pod \"keystone-db-create-8fq2p\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.013364 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgx2j\" (UniqueName: \"kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j\") pod \"keystone-db-create-8fq2p\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.065612 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hm7ql"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.081579 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-c263-account-create-update-wz7k6"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.081769 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.093003 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hm7ql"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.093257 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.114441 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c263-account-create-update-wz7k6"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.116711 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.116789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.117037 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjs86\" (UniqueName: \"kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " 
pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.117143 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4fc\" (UniqueName: \"kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.117755 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.118619 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.141786 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjs86\" (UniqueName: \"kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86\") pod \"keystone-002d-account-create-update-6kk29\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.187726 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.213122 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.220850 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4fc\" (UniqueName: \"kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.221656 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.224090 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.224148 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts\") pod \"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.224432 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzq74\" (UniqueName: \"kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74\") pod 
\"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.251046 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4fc\" (UniqueName: \"kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc\") pod \"placement-db-create-hm7ql\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.296700 4810 generic.go:334] "Generic (PLEG): container finished" podID="2222b6ca-79cd-48d7-b262-87e5cd4db6b1" containerID="9704b5e429194139e41b388c5c38c38e001c096f5c05263f386a9f8220160ce9" exitCode=0 Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.296778 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pd6hg" event={"ID":"2222b6ca-79cd-48d7-b262-87e5cd4db6b1","Type":"ContainerDied","Data":"9704b5e429194139e41b388c5c38c38e001c096f5c05263f386a9f8220160ce9"} Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.296803 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pd6hg" event={"ID":"2222b6ca-79cd-48d7-b262-87e5cd4db6b1","Type":"ContainerStarted","Data":"85c91ccb7dedab878f29dbfcb4d3c997a0045c38d5392bac957d809750393938"} Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.305445 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-79a4-account-create-update-mrm9x" event={"ID":"b4b22749-0497-48c2-b943-2c48aef05707","Type":"ContainerStarted","Data":"084e9f9fbe2e5f93a513bee51567c16a8dbaf61639fac09083f059ee237ae6a4"} Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.305516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-79a4-account-create-update-mrm9x" 
event={"ID":"b4b22749-0497-48c2-b943-2c48aef05707","Type":"ContainerStarted","Data":"0e9ae7399f107576a8fdd2096ba7d6ad90d3f962b000b58e6b5164ee30012fd1"} Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.327124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts\") pod \"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.327236 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzq74\" (UniqueName: \"kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74\") pod \"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.327864 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts\") pod \"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.338890 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-79a4-account-create-update-mrm9x" podStartSLOduration=2.338871862 podStartE2EDuration="2.338871862s" podCreationTimestamp="2026-02-19 15:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:01.335191082 +0000 UTC m=+1110.817221206" watchObservedRunningTime="2026-02-19 15:28:01.338871862 +0000 UTC m=+1110.820901986" 
Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.345775 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzq74\" (UniqueName: \"kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74\") pod \"placement-c263-account-create-update-wz7k6\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.398337 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.436933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.744895 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-002d-account-create-update-6kk29"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.823893 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fq2p"] Feb 19 15:28:01 crc kubenswrapper[4810]: W0219 15:28:01.826567 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca24a94_16a8_4b5b_9d99_bc98919feb21.slice/crio-2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7 WatchSource:0}: Error finding container 2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7: Status 404 returned error can't find the container with id 2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7 Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.918975 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hm7ql"] Feb 19 15:28:01 crc kubenswrapper[4810]: W0219 15:28:01.922592 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b6040a1_1df6_44da_ba23_4c7b1ccf17b1.slice/crio-66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79 WatchSource:0}: Error finding container 66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79: Status 404 returned error can't find the container with id 66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79 Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.986026 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-ffjjc"] Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.987715 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:01 crc kubenswrapper[4810]: I0219 15:28:01.994776 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-ffjjc"] Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.003516 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c263-account-create-update-wz7k6"] Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.038643 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.038691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2frb\" (UniqueName: \"kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.066562 4810 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/watcher-a42d-account-create-update-l2kgw"] Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.067574 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.069753 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.078816 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a42d-account-create-update-l2kgw"] Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.140110 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522qg\" (UniqueName: \"kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.140153 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.140184 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2frb\" (UniqueName: \"kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.140258 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.141313 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.163445 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2frb\" (UniqueName: \"kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb\") pod \"watcher-db-create-ffjjc\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.241665 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.241769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522qg\" (UniqueName: \"kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.243684 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.259023 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522qg\" (UniqueName: \"kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg\") pod \"watcher-a42d-account-create-update-l2kgw\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.315513 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4b22749-0497-48c2-b943-2c48aef05707" containerID="084e9f9fbe2e5f93a513bee51567c16a8dbaf61639fac09083f059ee237ae6a4" exitCode=0 Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.315597 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-79a4-account-create-update-mrm9x" event={"ID":"b4b22749-0497-48c2-b943-2c48aef05707","Type":"ContainerDied","Data":"084e9f9fbe2e5f93a513bee51567c16a8dbaf61639fac09083f059ee237ae6a4"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.319702 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c263-account-create-update-wz7k6" event={"ID":"9d8e48ca-d504-48a9-9e92-97651cd15d28","Type":"ContainerStarted","Data":"683b765d3388918ee0690173c641c6f414e8fc77c164afb4ab566f37723b326b"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.320187 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c263-account-create-update-wz7k6" event={"ID":"9d8e48ca-d504-48a9-9e92-97651cd15d28","Type":"ContainerStarted","Data":"dc2b8768ce86d039d1a4f235de22e0f1357ab21c7bcf98cf5982179ca5a156ce"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 
15:28:02.322275 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq2p" event={"ID":"bca24a94-16a8-4b5b-9d99-bc98919feb21","Type":"ContainerStarted","Data":"78c0fb5a6a2ddab1d7b49b378f905fccf1b07a5af8d34ee0f62b947801682e49"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.322305 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq2p" event={"ID":"bca24a94-16a8-4b5b-9d99-bc98919feb21","Type":"ContainerStarted","Data":"2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.324409 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hm7ql" event={"ID":"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1","Type":"ContainerStarted","Data":"72214a8edd0c54f3823201969c4eb1d1b241f1f9c89ed676fa59ee81e422993e"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.324451 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hm7ql" event={"ID":"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1","Type":"ContainerStarted","Data":"66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.326715 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-002d-account-create-update-6kk29" event={"ID":"c141144d-36a6-4c0c-b764-d7453c101ea3","Type":"ContainerStarted","Data":"7503cb280210459a8b150bda1b5d65c5f4d10619291800104ee52fa8927bdb82"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.326767 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-002d-account-create-update-6kk29" event={"ID":"c141144d-36a6-4c0c-b764-d7453c101ea3","Type":"ContainerStarted","Data":"0afa87d29255170cab02cb057e707fc9c3faa4ab5535334cc3ac3057731edf16"} Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.340938 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.349372 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c263-account-create-update-wz7k6" podStartSLOduration=1.349348697 podStartE2EDuration="1.349348697s" podCreationTimestamp="2026-02-19 15:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:02.346161819 +0000 UTC m=+1111.828191943" watchObservedRunningTime="2026-02-19 15:28:02.349348697 +0000 UTC m=+1111.831378821" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.364418 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hm7ql" podStartSLOduration=1.364397125 podStartE2EDuration="1.364397125s" podCreationTimestamp="2026-02-19 15:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:02.360603083 +0000 UTC m=+1111.842633207" watchObservedRunningTime="2026-02-19 15:28:02.364397125 +0000 UTC m=+1111.846427249" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.379996 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-002d-account-create-update-6kk29" podStartSLOduration=2.379978047 podStartE2EDuration="2.379978047s" podCreationTimestamp="2026-02-19 15:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:02.374729128 +0000 UTC m=+1111.856759252" watchObservedRunningTime="2026-02-19 15:28:02.379978047 +0000 UTC m=+1111.862008171" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.393086 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.398665 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-8fq2p" podStartSLOduration=2.398647754 podStartE2EDuration="2.398647754s" podCreationTimestamp="2026-02-19 15:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:02.394042021 +0000 UTC m=+1111.876072145" watchObservedRunningTime="2026-02-19 15:28:02.398647754 +0000 UTC m=+1111.880677878" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.767583 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pd6hg" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.854631 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6g2w\" (UniqueName: \"kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w\") pod \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.855009 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts\") pod \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\" (UID: \"2222b6ca-79cd-48d7-b262-87e5cd4db6b1\") " Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.855799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2222b6ca-79cd-48d7-b262-87e5cd4db6b1" (UID: "2222b6ca-79cd-48d7-b262-87e5cd4db6b1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.880013 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w" (OuterVolumeSpecName: "kube-api-access-f6g2w") pod "2222b6ca-79cd-48d7-b262-87e5cd4db6b1" (UID: "2222b6ca-79cd-48d7-b262-87e5cd4db6b1"). InnerVolumeSpecName "kube-api-access-f6g2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.940743 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-ffjjc"] Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.956998 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6g2w\" (UniqueName: \"kubernetes.io/projected/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-kube-api-access-f6g2w\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:02 crc kubenswrapper[4810]: I0219 15:28:02.957027 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2222b6ca-79cd-48d7-b262-87e5cd4db6b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.032167 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z6ghl"] Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.039659 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z6ghl"] Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.074878 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a42d-account-create-update-l2kgw"] Feb 19 15:28:03 crc kubenswrapper[4810]: W0219 15:28:03.078752 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f791f64_69f8_448d_8370_aeef0db30071.slice/crio-7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75 WatchSource:0}: Error finding container 7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75: Status 404 returned error can't find the container with id 7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75 Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.303346 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.335666 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pd6hg" event={"ID":"2222b6ca-79cd-48d7-b262-87e5cd4db6b1","Type":"ContainerDied","Data":"85c91ccb7dedab878f29dbfcb4d3c997a0045c38d5392bac957d809750393938"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.335703 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c91ccb7dedab878f29dbfcb4d3c997a0045c38d5392bac957d809750393938" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.335719 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pd6hg" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.340799 4810 generic.go:334] "Generic (PLEG): container finished" podID="c141144d-36a6-4c0c-b764-d7453c101ea3" containerID="7503cb280210459a8b150bda1b5d65c5f4d10619291800104ee52fa8927bdb82" exitCode=0 Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.340844 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-002d-account-create-update-6kk29" event={"ID":"c141144d-36a6-4c0c-b764-d7453c101ea3","Type":"ContainerDied","Data":"7503cb280210459a8b150bda1b5d65c5f4d10619291800104ee52fa8927bdb82"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.342253 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a42d-account-create-update-l2kgw" event={"ID":"7f791f64-69f8-448d-8370-aeef0db30071","Type":"ContainerStarted","Data":"e9bec1a534d25c6a14471d437a183afe877fd56061c47642147a9b763a2c6190"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.342302 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a42d-account-create-update-l2kgw" event={"ID":"7f791f64-69f8-448d-8370-aeef0db30071","Type":"ContainerStarted","Data":"7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.348146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ffjjc" event={"ID":"21eb5702-ca94-449c-839c-e3970593417d","Type":"ContainerStarted","Data":"ffa1569c8787f552547599568a5b882194ae4226f8c9d82766a9a36d606eb91a"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.348204 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ffjjc" event={"ID":"21eb5702-ca94-449c-839c-e3970593417d","Type":"ContainerStarted","Data":"a4e1ddd0bcf56dcdcf7a8d77529bbac7c3a7ecc8e18e47545f5e7f391f76a850"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.349964 4810 
generic.go:334] "Generic (PLEG): container finished" podID="9d8e48ca-d504-48a9-9e92-97651cd15d28" containerID="683b765d3388918ee0690173c641c6f414e8fc77c164afb4ab566f37723b326b" exitCode=0 Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.350040 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c263-account-create-update-wz7k6" event={"ID":"9d8e48ca-d504-48a9-9e92-97651cd15d28","Type":"ContainerDied","Data":"683b765d3388918ee0690173c641c6f414e8fc77c164afb4ab566f37723b326b"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.356882 4810 generic.go:334] "Generic (PLEG): container finished" podID="bca24a94-16a8-4b5b-9d99-bc98919feb21" containerID="78c0fb5a6a2ddab1d7b49b378f905fccf1b07a5af8d34ee0f62b947801682e49" exitCode=0 Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.356975 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq2p" event={"ID":"bca24a94-16a8-4b5b-9d99-bc98919feb21","Type":"ContainerDied","Data":"78c0fb5a6a2ddab1d7b49b378f905fccf1b07a5af8d34ee0f62b947801682e49"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.358473 4810 generic.go:334] "Generic (PLEG): container finished" podID="8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" containerID="72214a8edd0c54f3823201969c4eb1d1b241f1f9c89ed676fa59ee81e422993e" exitCode=0 Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.358682 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hm7ql" event={"ID":"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1","Type":"ContainerDied","Data":"72214a8edd0c54f3823201969c4eb1d1b241f1f9c89ed676fa59ee81e422993e"} Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.380652 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-ffjjc" podStartSLOduration=2.380631391 podStartE2EDuration="2.380631391s" podCreationTimestamp="2026-02-19 15:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:03.375369372 +0000 UTC m=+1112.857399496" watchObservedRunningTime="2026-02-19 15:28:03.380631391 +0000 UTC m=+1112.862661515" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.438187 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-a42d-account-create-update-l2kgw" podStartSLOduration=1.43817054 podStartE2EDuration="1.43817054s" podCreationTimestamp="2026-02-19 15:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:03.436998811 +0000 UTC m=+1112.919028935" watchObservedRunningTime="2026-02-19 15:28:03.43817054 +0000 UTC m=+1112.920200664" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.449210 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf99137-c194-466c-b92b-fbda63f4b3d5" path="/var/lib/kubelet/pods/0cf99137-c194-466c-b92b-fbda63f4b3d5/volumes" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.621290 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.679481 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts\") pod \"b4b22749-0497-48c2-b943-2c48aef05707\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.679589 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cpfn\" (UniqueName: \"kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn\") pod \"b4b22749-0497-48c2-b943-2c48aef05707\" (UID: \"b4b22749-0497-48c2-b943-2c48aef05707\") " Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.681080 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4b22749-0497-48c2-b943-2c48aef05707" (UID: "b4b22749-0497-48c2-b943-2c48aef05707"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.685559 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn" (OuterVolumeSpecName: "kube-api-access-6cpfn") pod "b4b22749-0497-48c2-b943-2c48aef05707" (UID: "b4b22749-0497-48c2-b943-2c48aef05707"). InnerVolumeSpecName "kube-api-access-6cpfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.781734 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b22749-0497-48c2-b943-2c48aef05707-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:03 crc kubenswrapper[4810]: I0219 15:28:03.781987 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cpfn\" (UniqueName: \"kubernetes.io/projected/b4b22749-0497-48c2-b943-2c48aef05707-kube-api-access-6cpfn\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.374082 4810 generic.go:334] "Generic (PLEG): container finished" podID="7f791f64-69f8-448d-8370-aeef0db30071" containerID="e9bec1a534d25c6a14471d437a183afe877fd56061c47642147a9b763a2c6190" exitCode=0 Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.374182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a42d-account-create-update-l2kgw" event={"ID":"7f791f64-69f8-448d-8370-aeef0db30071","Type":"ContainerDied","Data":"e9bec1a534d25c6a14471d437a183afe877fd56061c47642147a9b763a2c6190"} Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.381268 4810 generic.go:334] "Generic (PLEG): container finished" podID="21eb5702-ca94-449c-839c-e3970593417d" containerID="ffa1569c8787f552547599568a5b882194ae4226f8c9d82766a9a36d606eb91a" exitCode=0 Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.381366 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ffjjc" event={"ID":"21eb5702-ca94-449c-839c-e3970593417d","Type":"ContainerDied","Data":"ffa1569c8787f552547599568a5b882194ae4226f8c9d82766a9a36d606eb91a"} Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.384349 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-79a4-account-create-update-mrm9x" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.384438 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-79a4-account-create-update-mrm9x" event={"ID":"b4b22749-0497-48c2-b943-2c48aef05707","Type":"ContainerDied","Data":"0e9ae7399f107576a8fdd2096ba7d6ad90d3f962b000b58e6b5164ee30012fd1"} Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.384482 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9ae7399f107576a8fdd2096ba7d6ad90d3f962b000b58e6b5164ee30012fd1" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.821842 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.898731 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts\") pod \"bca24a94-16a8-4b5b-9d99-bc98919feb21\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.898781 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgx2j\" (UniqueName: \"kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j\") pod \"bca24a94-16a8-4b5b-9d99-bc98919feb21\" (UID: \"bca24a94-16a8-4b5b-9d99-bc98919feb21\") " Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.900535 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bca24a94-16a8-4b5b-9d99-bc98919feb21" (UID: "bca24a94-16a8-4b5b-9d99-bc98919feb21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.904245 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j" (OuterVolumeSpecName: "kube-api-access-lgx2j") pod "bca24a94-16a8-4b5b-9d99-bc98919feb21" (UID: "bca24a94-16a8-4b5b-9d99-bc98919feb21"). InnerVolumeSpecName "kube-api-access-lgx2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.961550 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.966914 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:04 crc kubenswrapper[4810]: I0219 15:28:04.976230 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.000999 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts\") pod \"9d8e48ca-d504-48a9-9e92-97651cd15d28\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001053 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjs86\" (UniqueName: \"kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86\") pod \"c141144d-36a6-4c0c-b764-d7453c101ea3\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001115 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts\") pod \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001163 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk4fc\" (UniqueName: \"kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc\") pod \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\" (UID: \"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001203 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzq74\" (UniqueName: \"kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74\") pod \"9d8e48ca-d504-48a9-9e92-97651cd15d28\" (UID: \"9d8e48ca-d504-48a9-9e92-97651cd15d28\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001320 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts\") pod \"c141144d-36a6-4c0c-b764-d7453c101ea3\" (UID: \"c141144d-36a6-4c0c-b764-d7453c101ea3\") " Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001778 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca24a94-16a8-4b5b-9d99-bc98919feb21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.001801 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgx2j\" (UniqueName: \"kubernetes.io/projected/bca24a94-16a8-4b5b-9d99-bc98919feb21-kube-api-access-lgx2j\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.002287 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c141144d-36a6-4c0c-b764-d7453c101ea3" (UID: "c141144d-36a6-4c0c-b764-d7453c101ea3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.004419 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d8e48ca-d504-48a9-9e92-97651cd15d28" (UID: "9d8e48ca-d504-48a9-9e92-97651cd15d28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.005736 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" (UID: "8b6040a1-1df6-44da-ba23-4c7b1ccf17b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.011595 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86" (OuterVolumeSpecName: "kube-api-access-fjs86") pod "c141144d-36a6-4c0c-b764-d7453c101ea3" (UID: "c141144d-36a6-4c0c-b764-d7453c101ea3"). InnerVolumeSpecName "kube-api-access-fjs86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.011726 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc" (OuterVolumeSpecName: "kube-api-access-fk4fc") pod "8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" (UID: "8b6040a1-1df6-44da-ba23-4c7b1ccf17b1"). InnerVolumeSpecName "kube-api-access-fk4fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.011620 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74" (OuterVolumeSpecName: "kube-api-access-nzq74") pod "9d8e48ca-d504-48a9-9e92-97651cd15d28" (UID: "9d8e48ca-d504-48a9-9e92-97651cd15d28"). InnerVolumeSpecName "kube-api-access-nzq74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.078732 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kpf4t"] Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079230 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079254 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079270 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8e48ca-d504-48a9-9e92-97651cd15d28" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079277 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8e48ca-d504-48a9-9e92-97651cd15d28" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079295 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b22749-0497-48c2-b943-2c48aef05707" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079303 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b22749-0497-48c2-b943-2c48aef05707" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079347 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c141144d-36a6-4c0c-b764-d7453c101ea3" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079358 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c141144d-36a6-4c0c-b764-d7453c101ea3" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079374 4810 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2222b6ca-79cd-48d7-b262-87e5cd4db6b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079383 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2222b6ca-79cd-48d7-b262-87e5cd4db6b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: E0219 15:28:05.079396 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca24a94-16a8-4b5b-9d99-bc98919feb21" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079403 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca24a94-16a8-4b5b-9d99-bc98919feb21" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079617 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b22749-0497-48c2-b943-2c48aef05707" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079644 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079656 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c141144d-36a6-4c0c-b764-d7453c101ea3" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079675 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8e48ca-d504-48a9-9e92-97651cd15d28" containerName="mariadb-account-create-update" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079687 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2222b6ca-79cd-48d7-b262-87e5cd4db6b1" containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.079706 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca24a94-16a8-4b5b-9d99-bc98919feb21" 
containerName="mariadb-database-create" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.080436 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.082969 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.083262 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lrdct" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.098164 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kpf4t"] Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103135 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103181 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tqp\" (UniqueName: \"kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103260 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103292 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103366 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103414 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjs86\" (UniqueName: \"kubernetes.io/projected/c141144d-36a6-4c0c-b764-d7453c101ea3-kube-api-access-fjs86\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103425 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103434 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk4fc\" (UniqueName: \"kubernetes.io/projected/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1-kube-api-access-fk4fc\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103444 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzq74\" (UniqueName: \"kubernetes.io/projected/9d8e48ca-d504-48a9-9e92-97651cd15d28-kube-api-access-nzq74\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103452 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c141144d-36a6-4c0c-b764-d7453c101ea3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.103514 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d8e48ca-d504-48a9-9e92-97651cd15d28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.110027 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/37e2af25-5b30-4fb9-801e-f4a84d665540-etc-swift\") pod \"swift-storage-0\" (UID: \"37e2af25-5b30-4fb9-801e-f4a84d665540\") " pod="openstack/swift-storage-0" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.204678 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.205027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tqp\" (UniqueName: \"kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.205173 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.205259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.207709 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.207782 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.213543 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.220611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tqp\" (UniqueName: \"kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp\") pod \"glance-db-sync-kpf4t\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.345448 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5488" podUID="4a4fa57b-aa00-4866-b31e-df29f7f86480" containerName="ovn-controller" probeResult="failure" output=< Feb 19 
15:28:05 crc kubenswrapper[4810]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 15:28:05 crc kubenswrapper[4810]: > Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.392012 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.402295 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.425231 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fq2p" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.425244 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fq2p" event={"ID":"bca24a94-16a8-4b5b-9d99-bc98919feb21","Type":"ContainerDied","Data":"2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.425309 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f15ac378d0660eacd4f50163aac8b0feb80fb1849a53563f3a697096f3125d7" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.427351 4810 generic.go:334] "Generic (PLEG): container finished" podID="4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c" containerID="781b07acf23d18cc10631b3a01aa0eb27d1e62e7e3cfc8db109ba3d58b915ff0" exitCode=0 Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.427394 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c","Type":"ContainerDied","Data":"781b07acf23d18cc10631b3a01aa0eb27d1e62e7e3cfc8db109ba3d58b915ff0"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.430715 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hm7ql" 
event={"ID":"8b6040a1-1df6-44da-ba23-4c7b1ccf17b1","Type":"ContainerDied","Data":"66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.430749 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d4f8540c8e3cd863482a260aa47693da07db5108ec65200fcc7ea0ffcf0e79" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.430805 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hm7ql" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.434134 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-002d-account-create-update-6kk29" event={"ID":"c141144d-36a6-4c0c-b764-d7453c101ea3","Type":"ContainerDied","Data":"0afa87d29255170cab02cb057e707fc9c3faa4ab5535334cc3ac3057731edf16"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.434181 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0afa87d29255170cab02cb057e707fc9c3faa4ab5535334cc3ac3057731edf16" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.434196 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-002d-account-create-update-6kk29" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.436134 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c263-account-create-update-wz7k6" event={"ID":"9d8e48ca-d504-48a9-9e92-97651cd15d28","Type":"ContainerDied","Data":"dc2b8768ce86d039d1a4f235de22e0f1357ab21c7bcf98cf5982179ca5a156ce"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.436161 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c263-account-create-update-wz7k6" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.436204 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2b8768ce86d039d1a4f235de22e0f1357ab21c7bcf98cf5982179ca5a156ce" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.437968 4810 generic.go:334] "Generic (PLEG): container finished" podID="2a3676ed-f06f-4dea-82a1-959716331113" containerID="d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726" exitCode=0 Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.438081 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerDied","Data":"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726"} Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.850255 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:05 crc kubenswrapper[4810]: I0219 15:28:05.931174 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.024515 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts\") pod \"21eb5702-ca94-449c-839c-e3970593417d\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.024557 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts\") pod \"7f791f64-69f8-448d-8370-aeef0db30071\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.024679 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522qg\" (UniqueName: \"kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg\") pod \"7f791f64-69f8-448d-8370-aeef0db30071\" (UID: \"7f791f64-69f8-448d-8370-aeef0db30071\") " Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.024900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2frb\" (UniqueName: \"kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb\") pod \"21eb5702-ca94-449c-839c-e3970593417d\" (UID: \"21eb5702-ca94-449c-839c-e3970593417d\") " Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.025669 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f791f64-69f8-448d-8370-aeef0db30071" (UID: "7f791f64-69f8-448d-8370-aeef0db30071"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.025672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21eb5702-ca94-449c-839c-e3970593417d" (UID: "21eb5702-ca94-449c-839c-e3970593417d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.026289 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21eb5702-ca94-449c-839c-e3970593417d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.026313 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f791f64-69f8-448d-8370-aeef0db30071-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.030105 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb" (OuterVolumeSpecName: "kube-api-access-q2frb") pod "21eb5702-ca94-449c-839c-e3970593417d" (UID: "21eb5702-ca94-449c-839c-e3970593417d"). InnerVolumeSpecName "kube-api-access-q2frb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.051015 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg" (OuterVolumeSpecName: "kube-api-access-522qg") pod "7f791f64-69f8-448d-8370-aeef0db30071" (UID: "7f791f64-69f8-448d-8370-aeef0db30071"). InnerVolumeSpecName "kube-api-access-522qg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.128118 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522qg\" (UniqueName: \"kubernetes.io/projected/7f791f64-69f8-448d-8370-aeef0db30071-kube-api-access-522qg\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.128162 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2frb\" (UniqueName: \"kubernetes.io/projected/21eb5702-ca94-449c-839c-e3970593417d-kube-api-access-q2frb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:06 crc kubenswrapper[4810]: W0219 15:28:06.172726 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37e2af25_5b30_4fb9_801e_f4a84d665540.slice/crio-45d9895cbd2409d4877c5d09b01ea6adaa17628d68fb6990723440872a02d8ff WatchSource:0}: Error finding container 45d9895cbd2409d4877c5d09b01ea6adaa17628d68fb6990723440872a02d8ff: Status 404 returned error can't find the container with id 45d9895cbd2409d4877c5d09b01ea6adaa17628d68fb6990723440872a02d8ff Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.175680 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.446937 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ffjjc" event={"ID":"21eb5702-ca94-449c-839c-e3970593417d","Type":"ContainerDied","Data":"a4e1ddd0bcf56dcdcf7a8d77529bbac7c3a7ecc8e18e47545f5e7f391f76a850"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.446977 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e1ddd0bcf56dcdcf7a8d77529bbac7c3a7ecc8e18e47545f5e7f391f76a850" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.447030 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-ffjjc" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.451589 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerStarted","Data":"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.451791 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.455424 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c","Type":"ContainerStarted","Data":"a47a4ee84b2d0fae427bb4579e5143bbbc5bb37b1a0a8a5a2b3b47a263edc8d0"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.455615 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.459687 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"45d9895cbd2409d4877c5d09b01ea6adaa17628d68fb6990723440872a02d8ff"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.460836 4810 generic.go:334] "Generic (PLEG): container finished" podID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerID="5f65c0deba7b3077c5501137f00e319288d66ec1245a0e431539e6d1d5d3d67c" exitCode=0 Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.460898 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerDied","Data":"5f65c0deba7b3077c5501137f00e319288d66ec1245a0e431539e6d1d5d3d67c"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.463657 4810 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-a42d-account-create-update-l2kgw" event={"ID":"7f791f64-69f8-448d-8370-aeef0db30071","Type":"ContainerDied","Data":"7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75"} Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.463699 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a67cdf1df2baa9907f5e2da0518cbfe07eb77699db82e7ea4f435cb906f9c75" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.463718 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a42d-account-create-update-l2kgw" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.485576 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.121225893 podStartE2EDuration="1m1.485553593s" podCreationTimestamp="2026-02-19 15:27:05 +0000 UTC" firstStartedPulling="2026-02-19 15:27:20.81481788 +0000 UTC m=+1070.296848004" lastFinishedPulling="2026-02-19 15:27:30.17914558 +0000 UTC m=+1079.661175704" observedRunningTime="2026-02-19 15:28:06.473196411 +0000 UTC m=+1115.955226545" watchObservedRunningTime="2026-02-19 15:28:06.485553593 +0000 UTC m=+1115.967583717" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.533393 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=52.766031208 podStartE2EDuration="1m2.533368634s" podCreationTimestamp="2026-02-19 15:27:04 +0000 UTC" firstStartedPulling="2026-02-19 15:27:20.804720103 +0000 UTC m=+1070.286750227" lastFinishedPulling="2026-02-19 15:27:30.572057529 +0000 UTC m=+1080.054087653" observedRunningTime="2026-02-19 15:28:06.524963028 +0000 UTC m=+1116.006993152" watchObservedRunningTime="2026-02-19 15:28:06.533368634 +0000 UTC m=+1116.015398798" Feb 19 15:28:06 crc kubenswrapper[4810]: I0219 15:28:06.869814 4810 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/glance-db-sync-kpf4t"] Feb 19 15:28:06 crc kubenswrapper[4810]: W0219 15:28:06.882145 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31093793_65b6_467c_8d5b_218e108fd330.slice/crio-bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9 WatchSource:0}: Error finding container bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9: Status 404 returned error can't find the container with id bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9 Feb 19 15:28:07 crc kubenswrapper[4810]: I0219 15:28:07.478020 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kpf4t" event={"ID":"31093793-65b6-467c-8d5b-218e108fd330","Type":"ContainerStarted","Data":"bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9"} Feb 19 15:28:07 crc kubenswrapper[4810]: I0219 15:28:07.479540 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"1ba1d37d6ad8e9c154ffe9f1daa0f1bf53029b1342beeac2e90156b0bd5d9b19"} Feb 19 15:28:07 crc kubenswrapper[4810]: I0219 15:28:07.483007 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerStarted","Data":"c5b200cb5f29f71ee660d52a8a35bdae1c6f7de01b68b55aa0a763c8a2cc371f"} Feb 19 15:28:07 crc kubenswrapper[4810]: I0219 15:28:07.483682 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.071345 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.310539915 podStartE2EDuration="1m4.07131166s" podCreationTimestamp="2026-02-19 15:27:04 +0000 UTC" 
firstStartedPulling="2026-02-19 15:27:20.811287514 +0000 UTC m=+1070.293317638" lastFinishedPulling="2026-02-19 15:27:30.572059249 +0000 UTC m=+1080.054089383" observedRunningTime="2026-02-19 15:28:07.533229509 +0000 UTC m=+1117.015259633" watchObservedRunningTime="2026-02-19 15:28:08.07131166 +0000 UTC m=+1117.553341784" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.071955 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j78h8"] Feb 19 15:28:08 crc kubenswrapper[4810]: E0219 15:28:08.072266 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f791f64-69f8-448d-8370-aeef0db30071" containerName="mariadb-account-create-update" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.072281 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f791f64-69f8-448d-8370-aeef0db30071" containerName="mariadb-account-create-update" Feb 19 15:28:08 crc kubenswrapper[4810]: E0219 15:28:08.072311 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21eb5702-ca94-449c-839c-e3970593417d" containerName="mariadb-database-create" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.072318 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="21eb5702-ca94-449c-839c-e3970593417d" containerName="mariadb-database-create" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.072484 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f791f64-69f8-448d-8370-aeef0db30071" containerName="mariadb-account-create-update" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.072504 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="21eb5702-ca94-449c-839c-e3970593417d" containerName="mariadb-database-create" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.073070 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.083252 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.087605 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j78h8"] Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.170357 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts\") pod \"root-account-create-update-j78h8\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.170469 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmnf\" (UniqueName: \"kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf\") pod \"root-account-create-update-j78h8\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.272465 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts\") pod \"root-account-create-update-j78h8\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.272557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzmnf\" (UniqueName: \"kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf\") pod \"root-account-create-update-j78h8\" (UID: 
\"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.273159 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts\") pod \"root-account-create-update-j78h8\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.292693 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzmnf\" (UniqueName: \"kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf\") pod \"root-account-create-update-j78h8\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.402768 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.493418 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"cd302b333f4e25ad0c36ebdbc56eef1b56c9908008ee49f7079b0bece263e84b"} Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.493663 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"9fd9aa9263382e1efda3414490d21caa5e02f1ef981b536f01a4b062b8959f11"} Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.493674 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"d50773999dfe3be05193a3f63c80d61aafc1b17203612313ef1633afce7cfe53"} Feb 19 15:28:08 crc kubenswrapper[4810]: I0219 15:28:08.838496 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j78h8"] Feb 19 15:28:09 crc kubenswrapper[4810]: W0219 15:28:09.108558 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e12a1f8_d78c_41b0_b295_e5e661bf0820.slice/crio-ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438 WatchSource:0}: Error finding container ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438: Status 404 returned error can't find the container with id ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438 Feb 19 15:28:09 crc kubenswrapper[4810]: I0219 15:28:09.501559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j78h8" 
event={"ID":"8e12a1f8-d78c-41b0-b295-e5e661bf0820","Type":"ContainerStarted","Data":"f70ca73c865d94282b27c6f5f6e86e7e4679dda8ba68c283e08b2a6314c29261"} Feb 19 15:28:09 crc kubenswrapper[4810]: I0219 15:28:09.501902 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j78h8" event={"ID":"8e12a1f8-d78c-41b0-b295-e5e661bf0820","Type":"ContainerStarted","Data":"ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438"} Feb 19 15:28:09 crc kubenswrapper[4810]: I0219 15:28:09.506894 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"660d66f469b3973c5e645c0ac3a97338ff5859a9678d2775d702a8f431f12f4e"} Feb 19 15:28:09 crc kubenswrapper[4810]: I0219 15:28:09.522813 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j78h8" podStartSLOduration=1.522793799 podStartE2EDuration="1.522793799s" podCreationTimestamp="2026-02-19 15:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:09.516247669 +0000 UTC m=+1118.998277813" watchObservedRunningTime="2026-02-19 15:28:09.522793799 +0000 UTC m=+1119.004823923" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.361608 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.376468 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s5488" podUID="4a4fa57b-aa00-4866-b31e-df29f7f86480" containerName="ovn-controller" probeResult="failure" output=< Feb 19 15:28:10 crc kubenswrapper[4810]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 15:28:10 crc kubenswrapper[4810]: > Feb 19 15:28:10 crc 
kubenswrapper[4810]: I0219 15:28:10.380676 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5t6ds" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.520359 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"e4fb6d2bda16502d024736976d935c4fe81f687c15140b0dd94266e3e3d9f390"} Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.520417 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"2732d970ffbc02e85ff4de493b80613565b0be8ca47af7642d3c73044ca3fb1e"} Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.520432 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"2341827275b751933bf24c3e78ff3b3ecc526884571fd989fcb9aec28efc66bc"} Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.524192 4810 generic.go:334] "Generic (PLEG): container finished" podID="8e12a1f8-d78c-41b0-b295-e5e661bf0820" containerID="f70ca73c865d94282b27c6f5f6e86e7e4679dda8ba68c283e08b2a6314c29261" exitCode=0 Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.524703 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j78h8" event={"ID":"8e12a1f8-d78c-41b0-b295-e5e661bf0820","Type":"ContainerDied","Data":"f70ca73c865d94282b27c6f5f6e86e7e4679dda8ba68c283e08b2a6314c29261"} Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.596610 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s5488-config-vj8mr"] Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.597736 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.599728 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.608191 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488-config-vj8mr"] Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724297 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724358 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724617 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724676 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: 
\"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724797 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.724985 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcqb\" (UniqueName: \"kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.827237 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.827777 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828061 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts\") pod 
\"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828238 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828132 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828402 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828369 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: 
\"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.828659 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcqb\" (UniqueName: \"kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.841793 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.846871 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.860519 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcqb\" (UniqueName: \"kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb\") pod \"ovn-controller-s5488-config-vj8mr\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:10 crc kubenswrapper[4810]: I0219 15:28:10.923683 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.480426 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488-config-vj8mr"] Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.541516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"029a7644f1b3f3adfa640290641c929aa41a9c5077caf3bc1c5fe0762c760b0a"} Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.541571 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"5452ca22717c7fdc7d9e2a9180cd6e5790679477011844d93e17b4c181ed1493"} Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.543693 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-vj8mr" event={"ID":"cd94f6ba-17f4-407d-96ce-aafc0d390ef0","Type":"ContainerStarted","Data":"845fa0b026213cafa08bcde7439eb4aca0fbbbffe9ee97506b610230e0f93d27"} Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.844937 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.946299 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts\") pod \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.946457 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzmnf\" (UniqueName: \"kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf\") pod \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\" (UID: \"8e12a1f8-d78c-41b0-b295-e5e661bf0820\") " Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.947984 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e12a1f8-d78c-41b0-b295-e5e661bf0820" (UID: "8e12a1f8-d78c-41b0-b295-e5e661bf0820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:11 crc kubenswrapper[4810]: I0219 15:28:11.954101 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf" (OuterVolumeSpecName: "kube-api-access-vzmnf") pod "8e12a1f8-d78c-41b0-b295-e5e661bf0820" (UID: "8e12a1f8-d78c-41b0-b295-e5e661bf0820"). InnerVolumeSpecName "kube-api-access-vzmnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.048097 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e12a1f8-d78c-41b0-b295-e5e661bf0820-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.048141 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzmnf\" (UniqueName: \"kubernetes.io/projected/8e12a1f8-d78c-41b0-b295-e5e661bf0820-kube-api-access-vzmnf\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.570806 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"d7afe90f92f6c863fef61af81f424cfcdca8a085d68a7769e175b490d0c93e90"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.570861 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"c5da4be9591440c1f64482455b1dff1d9275f518bf8bdbfe449a495712984261"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.570874 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"5d1b1a39a5e1076c953861b63e89de6b9de74961d48072fedbe67b9154a5d5e7"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.570885 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"6ab9e7f3cd670e2b5593371b7f00c38d53866f044a217c5b17cd350777f09512"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.570896 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"37e2af25-5b30-4fb9-801e-f4a84d665540","Type":"ContainerStarted","Data":"c903e9c02663a0961112a44749a7b2bd76f40928f112da91a67b24fd0c29d9bc"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.573759 4810 generic.go:334] "Generic (PLEG): container finished" podID="cd94f6ba-17f4-407d-96ce-aafc0d390ef0" containerID="c96da79c9ab27a3dc86e77ea8607bc39b965b2f04ca64ded9b1c4a74386d352e" exitCode=0 Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.573917 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-vj8mr" event={"ID":"cd94f6ba-17f4-407d-96ce-aafc0d390ef0","Type":"ContainerDied","Data":"c96da79c9ab27a3dc86e77ea8607bc39b965b2f04ca64ded9b1c4a74386d352e"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.576930 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j78h8" event={"ID":"8e12a1f8-d78c-41b0-b295-e5e661bf0820","Type":"ContainerDied","Data":"ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438"} Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.576963 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc35fad754bea5f86e280f1f346423215e3e8c4c6dc61ec34321a2365fec438" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.576985 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j78h8" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.636247 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.765587287 podStartE2EDuration="40.636228501s" podCreationTimestamp="2026-02-19 15:27:32 +0000 UTC" firstStartedPulling="2026-02-19 15:28:06.17494336 +0000 UTC m=+1115.656973484" lastFinishedPulling="2026-02-19 15:28:11.045584584 +0000 UTC m=+1120.527614698" observedRunningTime="2026-02-19 15:28:12.624397501 +0000 UTC m=+1122.106427625" watchObservedRunningTime="2026-02-19 15:28:12.636228501 +0000 UTC m=+1122.118258625" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.916003 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:28:12 crc kubenswrapper[4810]: E0219 15:28:12.916403 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e12a1f8-d78c-41b0-b295-e5e661bf0820" containerName="mariadb-account-create-update" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.916418 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e12a1f8-d78c-41b0-b295-e5e661bf0820" containerName="mariadb-account-create-update" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.916630 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e12a1f8-d78c-41b0-b295-e5e661bf0820" containerName="mariadb-account-create-update" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.922983 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.924904 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 15:28:12 crc kubenswrapper[4810]: I0219 15:28:12.941053 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.073852 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.073924 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.073952 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.074127 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " 
pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.074292 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj5k\" (UniqueName: \"kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.074422 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176150 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176230 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj5k\" (UniqueName: \"kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176265 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: 
\"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176348 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176386 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.176409 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.177313 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.177687 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " 
pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.177797 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.177901 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.178306 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.203410 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj5k\" (UniqueName: \"kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k\") pod \"dnsmasq-dns-6999bddfcf-fzf7g\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.276732 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.304170 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.307708 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:13 crc kubenswrapper[4810]: I0219 15:28:13.592222 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.344806 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s5488" Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.888228 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.938827 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.939073 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="prometheus" containerID="cri-o://e279b42557dcc4bac021262be680408c24c0a74b26422b2a204f61141627de2f" gracePeriod=600 Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.939456 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="thanos-sidecar" containerID="cri-o://e5ac3906ff8a232fe91505ae472e019caa241a902c02da4ff118ba00b0e5d016" gracePeriod=600 
Feb 19 15:28:15 crc kubenswrapper[4810]: I0219 15:28:15.939524 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="config-reloader" containerID="cri-o://96840091b1cea0064312675928d8c3948ff53222c6323509a84e0412a49c9891" gracePeriod=600 Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.134206 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.444834 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619242 4810 generic.go:334] "Generic (PLEG): container finished" podID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerID="e5ac3906ff8a232fe91505ae472e019caa241a902c02da4ff118ba00b0e5d016" exitCode=0 Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619289 4810 generic.go:334] "Generic (PLEG): container finished" podID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerID="96840091b1cea0064312675928d8c3948ff53222c6323509a84e0412a49c9891" exitCode=0 Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619303 4810 generic.go:334] "Generic (PLEG): container finished" podID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerID="e279b42557dcc4bac021262be680408c24c0a74b26422b2a204f61141627de2f" exitCode=0 Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619350 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerDied","Data":"e5ac3906ff8a232fe91505ae472e019caa241a902c02da4ff118ba00b0e5d016"} Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerDied","Data":"96840091b1cea0064312675928d8c3948ff53222c6323509a84e0412a49c9891"} Feb 19 15:28:16 crc kubenswrapper[4810]: I0219 15:28:16.619396 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerDied","Data":"e279b42557dcc4bac021262be680408c24c0a74b26422b2a204f61141627de2f"} Feb 19 15:28:18 crc kubenswrapper[4810]: I0219 15:28:18.304172 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 19 15:28:19 crc kubenswrapper[4810]: I0219 15:28:19.538211 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:28:19 crc kubenswrapper[4810]: I0219 15:28:19.538287 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:28:20 crc kubenswrapper[4810]: I0219 15:28:20.959383 4810 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.009902 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010031 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010078 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmcqb\" (UniqueName: \"kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010109 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010184 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010230 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn\") pod \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\" (UID: \"cd94f6ba-17f4-407d-96ce-aafc0d390ef0\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010618 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.010646 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run" (OuterVolumeSpecName: "var-run") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.011219 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.011637 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.012385 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts" (OuterVolumeSpecName: "scripts") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.013924 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb" (OuterVolumeSpecName: "kube-api-access-qmcqb") pod "cd94f6ba-17f4-407d-96ce-aafc0d390ef0" (UID: "cd94f6ba-17f4-407d-96ce-aafc0d390ef0"). InnerVolumeSpecName "kube-api-access-qmcqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113157 4810 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113203 4810 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113214 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113224 4810 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 
15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113235 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmcqb\" (UniqueName: \"kubernetes.io/projected/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-kube-api-access-qmcqb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.113246 4810 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd94f6ba-17f4-407d-96ce-aafc0d390ef0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.125438 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214205 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214273 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214365 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214430 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214511 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214653 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214718 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214783 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ks5p\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214819 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2\") 
pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.214854 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets\") pod \"a220bc57-3f31-4851-ad5c-9f61359f7de5\" (UID: \"a220bc57-3f31-4851-ad5c-9f61359f7de5\") " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.215171 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.215347 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.216078 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.217050 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.217779 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config" (OuterVolumeSpecName: "config") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.221702 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.222857 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p" (OuterVolumeSpecName: "kube-api-access-8ks5p") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "kube-api-access-8ks5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.222928 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.223032 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out" (OuterVolumeSpecName: "config-out") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.236074 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "pvc-b3b143f1-488b-49bf-8792-af0d760f341e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.249820 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config" (OuterVolumeSpecName: "web-config") pod "a220bc57-3f31-4851-ad5c-9f61359f7de5" (UID: "a220bc57-3f31-4851-ad5c-9f61359f7de5"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317693 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317772 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") on node \"crc\" " Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317793 4810 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a220bc57-3f31-4851-ad5c-9f61359f7de5-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317815 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317832 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ks5p\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-kube-api-access-8ks5p\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317845 4810 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a220bc57-3f31-4851-ad5c-9f61359f7de5-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317855 4810 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-web-config\") on node \"crc\" DevicePath \"\"" Feb 
19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317867 4810 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a220bc57-3f31-4851-ad5c-9f61359f7de5-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.317878 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a220bc57-3f31-4851-ad5c-9f61359f7de5-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.338259 4810 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.338415 4810 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b3b143f1-488b-49bf-8792-af0d760f341e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e") on node "crc" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.362585 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.419209 4810 reconciler_common.go:293] "Volume detached for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.663621 4810 generic.go:334] "Generic (PLEG): container finished" podID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerID="821ae759a5ba32197a93051b435e6f01263ceab4bc6d3d77eccce527b39b8143" exitCode=0 Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.663684 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" event={"ID":"2584fed3-16a4-489e-bf03-1c7461e9d3d8","Type":"ContainerDied","Data":"821ae759a5ba32197a93051b435e6f01263ceab4bc6d3d77eccce527b39b8143"} Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.663708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" event={"ID":"2584fed3-16a4-489e-bf03-1c7461e9d3d8","Type":"ContainerStarted","Data":"31b96436742b6e2f2b15881309695bfa289fff7a6145248f7da06333643cac1d"} Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.667001 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a220bc57-3f31-4851-ad5c-9f61359f7de5","Type":"ContainerDied","Data":"2e7a3617c56f7d96899fe573b10e0613d97d13afc38ab2fb9c62813a642860d3"} Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.667046 4810 scope.go:117] "RemoveContainer" containerID="e5ac3906ff8a232fe91505ae472e019caa241a902c02da4ff118ba00b0e5d016" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.667061 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.672289 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-vj8mr" event={"ID":"cd94f6ba-17f4-407d-96ce-aafc0d390ef0","Type":"ContainerDied","Data":"845fa0b026213cafa08bcde7439eb4aca0fbbbffe9ee97506b610230e0f93d27"} Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.672345 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fa0b026213cafa08bcde7439eb4aca0fbbbffe9ee97506b610230e0f93d27" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.672414 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-vj8mr" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.675907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kpf4t" event={"ID":"31093793-65b6-467c-8d5b-218e108fd330","Type":"ContainerStarted","Data":"f13c00b75444d82ae151313db252a559d67eb3a9e93fc91fd59fa886fe8ada73"} Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.692589 4810 scope.go:117] "RemoveContainer" containerID="96840091b1cea0064312675928d8c3948ff53222c6323509a84e0412a49c9891" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.710096 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.719635 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.729094 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kpf4t" podStartSLOduration=2.641325275 podStartE2EDuration="16.729076827s" podCreationTimestamp="2026-02-19 15:28:05 +0000 UTC" firstStartedPulling="2026-02-19 15:28:06.884839077 +0000 UTC m=+1116.366869201" lastFinishedPulling="2026-02-19 15:28:20.972590619 +0000 UTC m=+1130.454620753" observedRunningTime="2026-02-19 15:28:21.721735297 +0000 UTC m=+1131.203765431" watchObservedRunningTime="2026-02-19 15:28:21.729076827 +0000 UTC m=+1131.211106951" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.741645 4810 scope.go:117] "RemoveContainer" containerID="e279b42557dcc4bac021262be680408c24c0a74b26422b2a204f61141627de2f" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.743985 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:21 crc kubenswrapper[4810]: E0219 15:28:21.745064 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cd94f6ba-17f4-407d-96ce-aafc0d390ef0" containerName="ovn-config" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745087 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd94f6ba-17f4-407d-96ce-aafc0d390ef0" containerName="ovn-config" Feb 19 15:28:21 crc kubenswrapper[4810]: E0219 15:28:21.745122 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="init-config-reloader" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745131 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="init-config-reloader" Feb 19 15:28:21 crc kubenswrapper[4810]: E0219 15:28:21.745146 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="thanos-sidecar" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745154 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="thanos-sidecar" Feb 19 15:28:21 crc kubenswrapper[4810]: E0219 15:28:21.745176 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="config-reloader" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745184 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="config-reloader" Feb 19 15:28:21 crc kubenswrapper[4810]: E0219 15:28:21.745200 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="prometheus" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745209 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="prometheus" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745421 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="config-reloader" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745436 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="thanos-sidecar" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745466 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd94f6ba-17f4-407d-96ce-aafc0d390ef0" containerName="ovn-config" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.745484 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" containerName="prometheus" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.747470 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.751857 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.752129 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.752497 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-x7hn6" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.752659 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.752825 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.753158 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 
15:28:21.754582 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.755417 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.762162 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.776324 4810 scope.go:117] "RemoveContainer" containerID="024e6cbe9539fc73096ba26de873a5cd3591fc2cd12d0be8dd23110aaa2f3ec6" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.779151 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826127 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826565 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlm6\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826673 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") 
pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826765 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826851 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.826930 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827030 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827200 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827250 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827277 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827298 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827340 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.827563 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928741 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928836 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928861 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928881 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmlm6\" (UniqueName: 
\"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928907 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928928 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928953 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.928974 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.929026 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.929057 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.929076 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.929092 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.929107 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.931030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.931314 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.931818 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.935509 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.936405 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.936445 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.936540 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.936534 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.936850 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e63bc62ea909687cb5abb0c5cf8da7008d795f1441aaff1987b707a42a388027/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.938381 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.944000 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.945608 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.945888 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.947654 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmlm6\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:21 crc kubenswrapper[4810]: I0219 15:28:21.991985 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.065027 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.118918 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s5488-config-vj8mr"] Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.139966 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s5488-config-vj8mr"] Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.175427 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s5488-config-lsj57"] Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.176752 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.181500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.186553 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488-config-lsj57"] Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236114 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236162 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236316 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: 
\"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236589 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.236630 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz2r\" (UniqueName: \"kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338380 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338436 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338458 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn\") pod 
\"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338487 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz2r\" (UniqueName: \"kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338527 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338547 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338832 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.338878 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn\") pod \"ovn-controller-s5488-config-lsj57\" (UID: 
\"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.339567 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.339611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.341320 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.355031 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz2r\" (UniqueName: \"kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r\") pod \"ovn-controller-s5488-config-lsj57\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.507468 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.572243 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:28:22 crc kubenswrapper[4810]: W0219 15:28:22.572637 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c213a3a_78fd_4b42_bc1c_e09837eae684.slice/crio-971051d9ba9c03f03793d16fa10f74ed254c90779c474f368dd54279d38af75a WatchSource:0}: Error finding container 971051d9ba9c03f03793d16fa10f74ed254c90779c474f368dd54279d38af75a: Status 404 returned error can't find the container with id 971051d9ba9c03f03793d16fa10f74ed254c90779c474f368dd54279d38af75a Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.697614 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerStarted","Data":"971051d9ba9c03f03793d16fa10f74ed254c90779c474f368dd54279d38af75a"} Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.702413 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" event={"ID":"2584fed3-16a4-489e-bf03-1c7461e9d3d8","Type":"ContainerStarted","Data":"66285c6ba52e055f610f9273f522d8c7987aa81e9fe271b980a52b4e8dbd8794"} Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.702626 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.724940 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" podStartSLOduration=10.724908962 podStartE2EDuration="10.724908962s" podCreationTimestamp="2026-02-19 15:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 15:28:22.720025762 +0000 UTC m=+1132.202055886" watchObservedRunningTime="2026-02-19 15:28:22.724908962 +0000 UTC m=+1132.206939086" Feb 19 15:28:22 crc kubenswrapper[4810]: I0219 15:28:22.980946 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s5488-config-lsj57"] Feb 19 15:28:23 crc kubenswrapper[4810]: I0219 15:28:23.450360 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a220bc57-3f31-4851-ad5c-9f61359f7de5" path="/var/lib/kubelet/pods/a220bc57-3f31-4851-ad5c-9f61359f7de5/volumes" Feb 19 15:28:23 crc kubenswrapper[4810]: I0219 15:28:23.451434 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd94f6ba-17f4-407d-96ce-aafc0d390ef0" path="/var/lib/kubelet/pods/cd94f6ba-17f4-407d-96ce-aafc0d390ef0/volumes" Feb 19 15:28:23 crc kubenswrapper[4810]: I0219 15:28:23.712501 4810 generic.go:334] "Generic (PLEG): container finished" podID="89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" containerID="930ffde39b4c3d7913e11cb429594ee8cd480971fa345e4d0d06f707706f3472" exitCode=0 Feb 19 15:28:23 crc kubenswrapper[4810]: I0219 15:28:23.712592 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-lsj57" event={"ID":"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900","Type":"ContainerDied","Data":"930ffde39b4c3d7913e11cb429594ee8cd480971fa345e4d0d06f707706f3472"} Feb 19 15:28:23 crc kubenswrapper[4810]: I0219 15:28:23.712652 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-lsj57" event={"ID":"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900","Type":"ContainerStarted","Data":"710b255712c252a63884f7d5e66f38e1321cdca52c846bde3a6059ddb76b7537"} Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.053769 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080164 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080241 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080270 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080346 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080471 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbz2r\" (UniqueName: \"kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080509 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn\") pod \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\" (UID: \"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900\") " Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080734 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080797 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080941 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.080987 4810 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.081752 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts" (OuterVolumeSpecName: "scripts") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.081976 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run" (OuterVolumeSpecName: "var-run") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.088289 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r" (OuterVolumeSpecName: "kube-api-access-mbz2r") pod "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" (UID: "89fe19f5-3c0f-4fa5-8eed-59bd7b76f900"). InnerVolumeSpecName "kube-api-access-mbz2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.182265 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbz2r\" (UniqueName: \"kubernetes.io/projected/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-kube-api-access-mbz2r\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.182309 4810 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.182338 4810 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.182353 4810 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.182366 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.752706 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerStarted","Data":"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec"} Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.756507 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s5488-config-lsj57" 
event={"ID":"89fe19f5-3c0f-4fa5-8eed-59bd7b76f900","Type":"ContainerDied","Data":"710b255712c252a63884f7d5e66f38e1321cdca52c846bde3a6059ddb76b7537"} Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.756718 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710b255712c252a63884f7d5e66f38e1321cdca52c846bde3a6059ddb76b7537" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.756587 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s5488-config-lsj57" Feb 19 15:28:25 crc kubenswrapper[4810]: I0219 15:28:25.885691 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 19 15:28:26 crc kubenswrapper[4810]: I0219 15:28:26.133715 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 15:28:26 crc kubenswrapper[4810]: I0219 15:28:26.139389 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s5488-config-lsj57"] Feb 19 15:28:26 crc kubenswrapper[4810]: I0219 15:28:26.152868 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s5488-config-lsj57"] Feb 19 15:28:26 crc kubenswrapper[4810]: I0219 15:28:26.444551 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.448638 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" path="/var/lib/kubelet/pods/89fe19f5-3c0f-4fa5-8eed-59bd7b76f900/volumes" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.646642 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9wtnf"] Feb 19 15:28:27 crc kubenswrapper[4810]: E0219 15:28:27.646980 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" containerName="ovn-config" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.646997 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" containerName="ovn-config" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.647162 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fe19f5-3c0f-4fa5-8eed-59bd7b76f900" containerName="ovn-config" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.647748 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.669304 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9wtnf"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.746734 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-k9zsz"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.748029 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:27 crc kubenswrapper[4810]: W0219 15:28:27.754268 4810 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-j78zz": failed to list *v1.Secret: secrets "keystone-keystone-dockercfg-j78zz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 15:28:27 crc kubenswrapper[4810]: E0219 15:28:27.754550 4810 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-j78zz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-keystone-dockercfg-j78zz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 15:28:27 crc kubenswrapper[4810]: W0219 15:28:27.754611 4810 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: secrets "keystone" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 15:28:27 crc kubenswrapper[4810]: E0219 15:28:27.754638 4810 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 15:28:27 crc kubenswrapper[4810]: W0219 15:28:27.755107 4810 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: secrets "keystone-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found 
between node 'crc' and this object Feb 19 15:28:27 crc kubenswrapper[4810]: E0219 15:28:27.755132 4810 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 15:28:27 crc kubenswrapper[4810]: W0219 15:28:27.755634 4810 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: secrets "keystone-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 15:28:27 crc kubenswrapper[4810]: E0219 15:28:27.755660 4810 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.823522 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k9zsz"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.830579 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.830631 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lxq5k\" (UniqueName: \"kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.857913 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8ce6-account-create-update-ztxw9"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.860373 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.867441 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.868288 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8ce6-account-create-update-ztxw9"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.932480 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxq5k\" (UniqueName: \"kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.932952 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.933180 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.933210 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.933423 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.933768 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ptwwk"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.934294 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.934805 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.957714 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ptwwk"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.968835 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxq5k\" (UniqueName: \"kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k\") pod \"barbican-db-create-9wtnf\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") " pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.975715 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-sd4lr"] Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.976933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.981156 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.981687 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ngb4n" Feb 19 15:28:27 crc kubenswrapper[4810]: I0219 15:28:27.984430 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-sd4lr"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035540 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqz6c\" (UniqueName: \"kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035620 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035641 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035671 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035836 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts\") pod \"neutron-db-create-ptwwk\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035872 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7f9\" (UniqueName: \"kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9\") pod \"neutron-db-create-ptwwk\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.035936 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.041636 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.068585 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.076284 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc04-account-create-update-dmf9z"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.077502 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.079862 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.084386 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc04-account-create-update-dmf9z"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139687 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkbf\" (UniqueName: \"kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139740 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqz6c\" (UniqueName: \"kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139803 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139825 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts\") pod \"neutron-db-create-ptwwk\" (UID: 
\"c2c7665c-330a-45b8-b461-bd08b069b747\") " pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139850 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7f9\" (UniqueName: \"kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9\") pod \"neutron-db-create-ptwwk\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139875 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139913 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.139951 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.141039 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts\") pod \"neutron-db-create-ptwwk\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " 
pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.141674 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.147798 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9cp2h"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.150839 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.156778 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9cp2h"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.210870 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7f9\" (UniqueName: \"kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9\") pod \"neutron-db-create-ptwwk\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.211559 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqz6c\" (UniqueName: \"kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c\") pod \"barbican-8ce6-account-create-update-ztxw9\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") " pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241054 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241115 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqqn\" (UniqueName: \"kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241153 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgfb\" (UniqueName: \"kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241193 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241224 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241243 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241277 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.241312 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnkbf\" (UniqueName: \"kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.244961 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.248167 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.254711 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.256917 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.261934 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnkbf\" (UniqueName: \"kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf\") pod \"watcher-db-sync-sd4lr\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") " pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.262299 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9wtnf" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.277480 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.347857 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-sd4lr" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.348361 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgfb\" (UniqueName: \"kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.348454 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.348550 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.348605 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqqn\" (UniqueName: \"kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.349304 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: 
\"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.350213 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h" Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.353906 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.354127 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="dnsmasq-dns" containerID="cri-o://e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384" gracePeriod=10 Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.371662 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-43b3-account-create-update-6gqq5"] Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.373225 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.378935 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.378996 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzqqn\" (UniqueName: \"kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn\") pod \"neutron-dc04-account-create-update-dmf9z\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") " pod="openstack/neutron-dc04-account-create-update-dmf9z"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.408684 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgfb\" (UniqueName: \"kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb\") pod \"cinder-db-create-9cp2h\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") " pod="openstack/cinder-db-create-9cp2h"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.418120 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43b3-account-create-update-6gqq5"]
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.429224 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc04-account-create-update-dmf9z"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.454034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jng\" (UniqueName: \"kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.455765 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.481261 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8ce6-account-create-update-ztxw9"
Feb 19 15:28:28 crc kubenswrapper[4810]: E0219 15:28:28.545445 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod788aae13_b274_4965_ac0c_8ac075c32567.slice/crio-conmon-e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod788aae13_b274_4965_ac0c_8ac075c32567.slice/crio-e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.560311 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25jng\" (UniqueName: \"kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.560460 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.561677 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.576954 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25jng\" (UniqueName: \"kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng\") pod \"cinder-43b3-account-create-update-6gqq5\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.587079 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.605656 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cp2h"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.660782 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.700143 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43b3-account-create-update-6gqq5"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.730340 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 15:28:28 crc kubenswrapper[4810]: E0219 15:28:28.736957 4810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:55748->38.102.83.162:41765: write tcp 38.102.83.162:55748->38.102.83.162:41765: write: broken pipe
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.740549 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data\") pod \"keystone-db-sync-k9zsz\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") " pod="openstack/keystone-db-sync-k9zsz"
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.786456 4810 generic.go:334] "Generic (PLEG): container finished" podID="788aae13-b274-4965-ac0c-8ac075c32567" containerID="e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384" exitCode=0
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.786499 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" event={"ID":"788aae13-b274-4965-ac0c-8ac075c32567","Type":"ContainerDied","Data":"e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384"}
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.829289 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ptwwk"]
Feb 19 15:28:28 crc kubenswrapper[4810]: I0219 15:28:28.878881 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9wtnf"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.002838 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-sd4lr"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.139820 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j78zz"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.144078 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9zsz"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.160424 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.172147 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc04-account-create-update-dmf9z"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.201042 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8ce6-account-create-update-ztxw9"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.273902 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb\") pod \"788aae13-b274-4965-ac0c-8ac075c32567\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") "
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.274291 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config\") pod \"788aae13-b274-4965-ac0c-8ac075c32567\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") "
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.274364 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc\") pod \"788aae13-b274-4965-ac0c-8ac075c32567\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") "
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.274401 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d7lt\" (UniqueName: \"kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt\") pod \"788aae13-b274-4965-ac0c-8ac075c32567\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") "
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.274490 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb\") pod \"788aae13-b274-4965-ac0c-8ac075c32567\" (UID: \"788aae13-b274-4965-ac0c-8ac075c32567\") "
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.295990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt" (OuterVolumeSpecName: "kube-api-access-9d7lt") pod "788aae13-b274-4965-ac0c-8ac075c32567" (UID: "788aae13-b274-4965-ac0c-8ac075c32567"). InnerVolumeSpecName "kube-api-access-9d7lt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.330953 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9cp2h"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.376207 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d7lt\" (UniqueName: \"kubernetes.io/projected/788aae13-b274-4965-ac0c-8ac075c32567-kube-api-access-9d7lt\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.465126 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "788aae13-b274-4965-ac0c-8ac075c32567" (UID: "788aae13-b274-4965-ac0c-8ac075c32567"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.477626 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.494371 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "788aae13-b274-4965-ac0c-8ac075c32567" (UID: "788aae13-b274-4965-ac0c-8ac075c32567"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.496196 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-43b3-account-create-update-6gqq5"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.557660 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config" (OuterVolumeSpecName: "config") pod "788aae13-b274-4965-ac0c-8ac075c32567" (UID: "788aae13-b274-4965-ac0c-8ac075c32567"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.579184 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.579214 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.596282 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k9zsz"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.602919 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "788aae13-b274-4965-ac0c-8ac075c32567" (UID: "788aae13-b274-4965-ac0c-8ac075c32567"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:29 crc kubenswrapper[4810]: W0219 15:28:29.613636 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082fc735_2850_452d_841a_0af9ed7ed171.slice/crio-866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c WatchSource:0}: Error finding container 866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c: Status 404 returned error can't find the container with id 866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.680253 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788aae13-b274-4965-ac0c-8ac075c32567-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.817703 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sd4lr" event={"ID":"63eeb47c-9c4a-4e36-be24-61c126517600","Type":"ContainerStarted","Data":"490bfe3163d0e855ea419f13cf8a5bebf9306234f76375e3967a08a8c33723b7"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.822503 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptwwk" event={"ID":"c2c7665c-330a-45b8-b461-bd08b069b747","Type":"ContainerStarted","Data":"f1368987940841c90c15fd62f94998dda89194fc11c576179de536979c6adc82"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.822541 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptwwk" event={"ID":"c2c7665c-330a-45b8-b461-bd08b069b747","Type":"ContainerStarted","Data":"fbfaae03009c8948f491fee5a2a00dca9fa1a0a5f93189fe34e02d8b5cd0dc77"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.827112 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43b3-account-create-update-6gqq5" event={"ID":"5770188c-7480-4529-8450-3d1a44cf50d6","Type":"ContainerStarted","Data":"a57eff1ff8b65e63b3a00f379e6a439481a3517ce81fa871e65130447c5a498b"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.829688 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc04-account-create-update-dmf9z" event={"ID":"05b15da5-701a-492a-b986-99b767d2876c","Type":"ContainerStarted","Data":"e7fde16e8762a1bda91d4aa8dbb2846cb1cc1e9049ca68a05b5713d4c512a4b8"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.829720 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc04-account-create-update-dmf9z" event={"ID":"05b15da5-701a-492a-b986-99b767d2876c","Type":"ContainerStarted","Data":"8e600340158d54f5ebb4c03dc507ad12f985664e178179ccd963dddb0bb69620"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.831703 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8ce6-account-create-update-ztxw9" event={"ID":"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d","Type":"ContainerStarted","Data":"58aa0a5c9039d1f2e2e2fa3a8464a2460e5ec3924d291301dda66cb2085c37f2"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.831732 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8ce6-account-create-update-ztxw9" event={"ID":"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d","Type":"ContainerStarted","Data":"3b7cd5c39591003c8d8b9a8e7e5c5a0c7eb4fe26dcbaf6fdb194cc285fd78923"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.835193 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9zsz" event={"ID":"082fc735-2850-452d-841a-0af9ed7ed171","Type":"ContainerStarted","Data":"866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.838165 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-ptwwk" podStartSLOduration=2.83814776 podStartE2EDuration="2.83814776s" podCreationTimestamp="2026-02-19 15:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:29.836408678 +0000 UTC m=+1139.318438802" watchObservedRunningTime="2026-02-19 15:28:29.83814776 +0000 UTC m=+1139.320177884"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.840958 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw" event={"ID":"788aae13-b274-4965-ac0c-8ac075c32567","Type":"ContainerDied","Data":"7cd4fcbb128b34896f354e231a1de0a145b80016e5c0592bfefbc4361bfadbaf"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.841420 4810 scope.go:117] "RemoveContainer" containerID="e901006a072b768e5b2fffb51122f3ae9427a4d8bce325b789cb52d5dc2df384"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.841490 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-flxcw"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.846049 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cp2h" event={"ID":"700fd144-e077-4468-80a4-f131fdb9d67e","Type":"ContainerStarted","Data":"03a5a65957944d9b807e9bb77848eeeb10aa95967ed1b4ac444e66a5d8f6c6f2"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.854045 4810 generic.go:334] "Generic (PLEG): container finished" podID="a6217aad-07e6-49b6-8e80-41e75cecaaf5" containerID="317f50bb910bab31d9c1242a97f9988671bee73e88d7e795833b2626793ec0c6" exitCode=0
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.854098 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9wtnf" event={"ID":"a6217aad-07e6-49b6-8e80-41e75cecaaf5","Type":"ContainerDied","Data":"317f50bb910bab31d9c1242a97f9988671bee73e88d7e795833b2626793ec0c6"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.854123 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9wtnf" event={"ID":"a6217aad-07e6-49b6-8e80-41e75cecaaf5","Type":"ContainerStarted","Data":"6e01ddf79bfd9efeeb1dcc46b06ded16ef7b0034b875cc74ad588200086084a8"}
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.869393 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc04-account-create-update-dmf9z" podStartSLOduration=1.869377955 podStartE2EDuration="1.869377955s" podCreationTimestamp="2026-02-19 15:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:29.865453119 +0000 UTC m=+1139.347483243" watchObservedRunningTime="2026-02-19 15:28:29.869377955 +0000 UTC m=+1139.351408079"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.873624 4810 scope.go:117] "RemoveContainer" containerID="28ca2a964c46491b21497e9a884496f03f8a4889da795bda4b42a303c67c82ef"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.880615 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8ce6-account-create-update-ztxw9" podStartSLOduration=2.880595389 podStartE2EDuration="2.880595389s" podCreationTimestamp="2026-02-19 15:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:29.880105287 +0000 UTC m=+1139.362135411" watchObservedRunningTime="2026-02-19 15:28:29.880595389 +0000 UTC m=+1139.362625513"
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.915272 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.929378 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-flxcw"]
Feb 19 15:28:29 crc kubenswrapper[4810]: I0219 15:28:29.951707 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-9cp2h" podStartSLOduration=1.95169003 podStartE2EDuration="1.95169003s" podCreationTimestamp="2026-02-19 15:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:29.936414436 +0000 UTC m=+1139.418444560" watchObservedRunningTime="2026-02-19 15:28:29.95169003 +0000 UTC m=+1139.433720144"
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.864463 4810 generic.go:334] "Generic (PLEG): container finished" podID="31093793-65b6-467c-8d5b-218e108fd330" containerID="f13c00b75444d82ae151313db252a559d67eb3a9e93fc91fd59fa886fe8ada73" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.864565 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kpf4t" event={"ID":"31093793-65b6-467c-8d5b-218e108fd330","Type":"ContainerDied","Data":"f13c00b75444d82ae151313db252a559d67eb3a9e93fc91fd59fa886fe8ada73"}
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.866673 4810 generic.go:334] "Generic (PLEG): container finished" podID="05b15da5-701a-492a-b986-99b767d2876c" containerID="e7fde16e8762a1bda91d4aa8dbb2846cb1cc1e9049ca68a05b5713d4c512a4b8" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.866727 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc04-account-create-update-dmf9z" event={"ID":"05b15da5-701a-492a-b986-99b767d2876c","Type":"ContainerDied","Data":"e7fde16e8762a1bda91d4aa8dbb2846cb1cc1e9049ca68a05b5713d4c512a4b8"}
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.869311 4810 generic.go:334] "Generic (PLEG): container finished" podID="5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" containerID="58aa0a5c9039d1f2e2e2fa3a8464a2460e5ec3924d291301dda66cb2085c37f2" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.869390 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8ce6-account-create-update-ztxw9" event={"ID":"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d","Type":"ContainerDied","Data":"58aa0a5c9039d1f2e2e2fa3a8464a2460e5ec3924d291301dda66cb2085c37f2"}
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.871489 4810 generic.go:334] "Generic (PLEG): container finished" podID="700fd144-e077-4468-80a4-f131fdb9d67e" containerID="9f4b973758b7ab4df4daae42709ff686161b77974aebe961605bcdd7b7ee6895" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.871529 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cp2h" event={"ID":"700fd144-e077-4468-80a4-f131fdb9d67e","Type":"ContainerDied","Data":"9f4b973758b7ab4df4daae42709ff686161b77974aebe961605bcdd7b7ee6895"}
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.874252 4810 generic.go:334] "Generic (PLEG): container finished" podID="5770188c-7480-4529-8450-3d1a44cf50d6" containerID="235b187eb3bd473181cf1d8a9d02071a2d445e3841afa055bb60de833ffbcec1" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.874292 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43b3-account-create-update-6gqq5" event={"ID":"5770188c-7480-4529-8450-3d1a44cf50d6","Type":"ContainerDied","Data":"235b187eb3bd473181cf1d8a9d02071a2d445e3841afa055bb60de833ffbcec1"}
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.880637 4810 generic.go:334] "Generic (PLEG): container finished" podID="c2c7665c-330a-45b8-b461-bd08b069b747" containerID="f1368987940841c90c15fd62f94998dda89194fc11c576179de536979c6adc82" exitCode=0
Feb 19 15:28:30 crc kubenswrapper[4810]: I0219 15:28:30.881136 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptwwk" event={"ID":"c2c7665c-330a-45b8-b461-bd08b069b747","Type":"ContainerDied","Data":"f1368987940841c90c15fd62f94998dda89194fc11c576179de536979c6adc82"}
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.229765 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9wtnf"
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.307628 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts\") pod \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") "
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.307820 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxq5k\" (UniqueName: \"kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k\") pod \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\" (UID: \"a6217aad-07e6-49b6-8e80-41e75cecaaf5\") "
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.308426 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6217aad-07e6-49b6-8e80-41e75cecaaf5" (UID: "a6217aad-07e6-49b6-8e80-41e75cecaaf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.314313 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k" (OuterVolumeSpecName: "kube-api-access-lxq5k") pod "a6217aad-07e6-49b6-8e80-41e75cecaaf5" (UID: "a6217aad-07e6-49b6-8e80-41e75cecaaf5"). InnerVolumeSpecName "kube-api-access-lxq5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.409693 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6217aad-07e6-49b6-8e80-41e75cecaaf5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.409952 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxq5k\" (UniqueName: \"kubernetes.io/projected/a6217aad-07e6-49b6-8e80-41e75cecaaf5-kube-api-access-lxq5k\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.459468 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788aae13-b274-4965-ac0c-8ac075c32567" path="/var/lib/kubelet/pods/788aae13-b274-4965-ac0c-8ac075c32567/volumes"
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.895622 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9wtnf" event={"ID":"a6217aad-07e6-49b6-8e80-41e75cecaaf5","Type":"ContainerDied","Data":"6e01ddf79bfd9efeeb1dcc46b06ded16ef7b0034b875cc74ad588200086084a8"}
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.895663 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e01ddf79bfd9efeeb1dcc46b06ded16ef7b0034b875cc74ad588200086084a8"
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.895675 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9wtnf"
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.899577 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerID="f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec" exitCode=0
Feb 19 15:28:31 crc kubenswrapper[4810]: I0219 15:28:31.899717 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerDied","Data":"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec"}
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.177619 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cp2h"
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.190852 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc04-account-create-update-dmf9z"
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.259391 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts\") pod \"700fd144-e077-4468-80a4-f131fdb9d67e\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") "
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.259434 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts\") pod \"05b15da5-701a-492a-b986-99b767d2876c\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") "
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.259631 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqqn\" (UniqueName: \"kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn\") pod \"05b15da5-701a-492a-b986-99b767d2876c\" (UID: \"05b15da5-701a-492a-b986-99b767d2876c\") "
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.259696 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psgfb\" (UniqueName: \"kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb\") pod \"700fd144-e077-4468-80a4-f131fdb9d67e\" (UID: \"700fd144-e077-4468-80a4-f131fdb9d67e\") "
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.260367 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "700fd144-e077-4468-80a4-f131fdb9d67e" (UID: "700fd144-e077-4468-80a4-f131fdb9d67e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.261209 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05b15da5-701a-492a-b986-99b767d2876c" (UID: "05b15da5-701a-492a-b986-99b767d2876c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.265466 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn" (OuterVolumeSpecName: "kube-api-access-vzqqn") pod "05b15da5-701a-492a-b986-99b767d2876c" (UID: "05b15da5-701a-492a-b986-99b767d2876c"). InnerVolumeSpecName "kube-api-access-vzqqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.266514 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb" (OuterVolumeSpecName: "kube-api-access-psgfb") pod "700fd144-e077-4468-80a4-f131fdb9d67e" (UID: "700fd144-e077-4468-80a4-f131fdb9d67e"). InnerVolumeSpecName "kube-api-access-psgfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.361543 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqqn\" (UniqueName: \"kubernetes.io/projected/05b15da5-701a-492a-b986-99b767d2876c-kube-api-access-vzqqn\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.361576 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psgfb\" (UniqueName: \"kubernetes.io/projected/700fd144-e077-4468-80a4-f131fdb9d67e-kube-api-access-psgfb\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.361586 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/700fd144-e077-4468-80a4-f131fdb9d67e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.361595 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05b15da5-701a-492a-b986-99b767d2876c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.939029 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc04-account-create-update-dmf9z" event={"ID":"05b15da5-701a-492a-b986-99b767d2876c","Type":"ContainerDied","Data":"8e600340158d54f5ebb4c03dc507ad12f985664e178179ccd963dddb0bb69620"}
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.939119 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e600340158d54f5ebb4c03dc507ad12f985664e178179ccd963dddb0bb69620"
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.939597 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc04-account-create-update-dmf9z"
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.942295 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9cp2h" event={"ID":"700fd144-e077-4468-80a4-f131fdb9d67e","Type":"ContainerDied","Data":"03a5a65957944d9b807e9bb77848eeeb10aa95967ed1b4ac444e66a5d8f6c6f2"}
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.942376 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a5a65957944d9b807e9bb77848eeeb10aa95967ed1b4ac444e66a5d8f6c6f2"
Feb 19 15:28:34 crc kubenswrapper[4810]: I0219 15:28:34.942495 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9cp2h"
Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.563048 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8ce6-account-create-update-ztxw9"
Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.725012 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts\") pod \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") "
Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.725113 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqz6c\" (UniqueName: \"kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c\") pod \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\" (UID: \"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d\") "
Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.725590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" (UID: "5ac347c6-4f1b-4b05-87a0-9332dec2ba9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.730564 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c" (OuterVolumeSpecName: "kube-api-access-tqz6c") pod "5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" (UID: "5ac347c6-4f1b-4b05-87a0-9332dec2ba9d"). InnerVolumeSpecName "kube-api-access-tqz6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.826832 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqz6c\" (UniqueName: \"kubernetes.io/projected/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-kube-api-access-tqz6c\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.826867 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.970350 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8ce6-account-create-update-ztxw9" event={"ID":"5ac347c6-4f1b-4b05-87a0-9332dec2ba9d","Type":"ContainerDied","Data":"3b7cd5c39591003c8d8b9a8e7e5c5a0c7eb4fe26dcbaf6fdb194cc285fd78923"} Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.970400 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7cd5c39591003c8d8b9a8e7e5c5a0c7eb4fe26dcbaf6fdb194cc285fd78923" Feb 19 15:28:37 crc kubenswrapper[4810]: I0219 15:28:37.970447 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8ce6-account-create-update-ztxw9" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.071487 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.086722 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43b3-account-create-update-6gqq5" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.106333 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.234835 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data\") pod \"31093793-65b6-467c-8d5b-218e108fd330\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.234892 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jng\" (UniqueName: \"kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng\") pod \"5770188c-7480-4529-8450-3d1a44cf50d6\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.234924 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts\") pod \"5770188c-7480-4529-8450-3d1a44cf50d6\" (UID: \"5770188c-7480-4529-8450-3d1a44cf50d6\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.234964 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tqp\" (UniqueName: \"kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp\") pod \"31093793-65b6-467c-8d5b-218e108fd330\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.235004 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data\") pod \"31093793-65b6-467c-8d5b-218e108fd330\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.235023 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle\") pod \"31093793-65b6-467c-8d5b-218e108fd330\" (UID: \"31093793-65b6-467c-8d5b-218e108fd330\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.235089 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7f9\" (UniqueName: \"kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9\") pod \"c2c7665c-330a-45b8-b461-bd08b069b747\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.235120 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts\") pod \"c2c7665c-330a-45b8-b461-bd08b069b747\" (UID: \"c2c7665c-330a-45b8-b461-bd08b069b747\") " Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.236614 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2c7665c-330a-45b8-b461-bd08b069b747" (UID: "c2c7665c-330a-45b8-b461-bd08b069b747"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.240066 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp" (OuterVolumeSpecName: "kube-api-access-h5tqp") pod "31093793-65b6-467c-8d5b-218e108fd330" (UID: "31093793-65b6-467c-8d5b-218e108fd330"). InnerVolumeSpecName "kube-api-access-h5tqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.240500 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5770188c-7480-4529-8450-3d1a44cf50d6" (UID: "5770188c-7480-4529-8450-3d1a44cf50d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.243861 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31093793-65b6-467c-8d5b-218e108fd330" (UID: "31093793-65b6-467c-8d5b-218e108fd330"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.245923 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9" (OuterVolumeSpecName: "kube-api-access-8z7f9") pod "c2c7665c-330a-45b8-b461-bd08b069b747" (UID: "c2c7665c-330a-45b8-b461-bd08b069b747"). InnerVolumeSpecName "kube-api-access-8z7f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.252924 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng" (OuterVolumeSpecName: "kube-api-access-25jng") pod "5770188c-7480-4529-8450-3d1a44cf50d6" (UID: "5770188c-7480-4529-8450-3d1a44cf50d6"). InnerVolumeSpecName "kube-api-access-25jng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.293801 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31093793-65b6-467c-8d5b-218e108fd330" (UID: "31093793-65b6-467c-8d5b-218e108fd330"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.330500 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data" (OuterVolumeSpecName: "config-data") pod "31093793-65b6-467c-8d5b-218e108fd330" (UID: "31093793-65b6-467c-8d5b-218e108fd330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.337671 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7f9\" (UniqueName: \"kubernetes.io/projected/c2c7665c-330a-45b8-b461-bd08b069b747-kube-api-access-8z7f9\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338520 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2c7665c-330a-45b8-b461-bd08b069b747-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338546 4810 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338560 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25jng\" (UniqueName: 
\"kubernetes.io/projected/5770188c-7480-4529-8450-3d1a44cf50d6-kube-api-access-25jng\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338574 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5770188c-7480-4529-8450-3d1a44cf50d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338586 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tqp\" (UniqueName: \"kubernetes.io/projected/31093793-65b6-467c-8d5b-218e108fd330-kube-api-access-h5tqp\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338598 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.338609 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31093793-65b6-467c-8d5b-218e108fd330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.979386 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sd4lr" event={"ID":"63eeb47c-9c4a-4e36-be24-61c126517600","Type":"ContainerStarted","Data":"7e83b0c5177b1183e58ad0498417fc1c3b6e142723e7482bda0235e4615b43f5"} Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.980911 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-43b3-account-create-update-6gqq5" event={"ID":"5770188c-7480-4529-8450-3d1a44cf50d6","Type":"ContainerDied","Data":"a57eff1ff8b65e63b3a00f379e6a439481a3517ce81fa871e65130447c5a498b"} Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.981311 4810 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a57eff1ff8b65e63b3a00f379e6a439481a3517ce81fa871e65130447c5a498b" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.980955 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-43b3-account-create-update-6gqq5" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.984065 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptwwk" event={"ID":"c2c7665c-330a-45b8-b461-bd08b069b747","Type":"ContainerDied","Data":"fbfaae03009c8948f491fee5a2a00dca9fa1a0a5f93189fe34e02d8b5cd0dc77"} Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.984186 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbfaae03009c8948f491fee5a2a00dca9fa1a0a5f93189fe34e02d8b5cd0dc77" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.984343 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ptwwk" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.988614 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kpf4t" event={"ID":"31093793-65b6-467c-8d5b-218e108fd330","Type":"ContainerDied","Data":"bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9"} Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.988648 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6ea40573eeae6956f4018d71eb8ba4737a0b7a57f0e9d98686d8f2e0c053a9" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.988732 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kpf4t" Feb 19 15:28:38 crc kubenswrapper[4810]: I0219 15:28:38.999545 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9zsz" event={"ID":"082fc735-2850-452d-841a-0af9ed7ed171","Type":"ContainerStarted","Data":"22a00ed65eebcb7030f20de212b927e0556118314908589176cea5b5329504cb"} Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.002985 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerStarted","Data":"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d"} Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.006063 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-sd4lr" podStartSLOduration=2.975481553 podStartE2EDuration="12.006040143s" podCreationTimestamp="2026-02-19 15:28:27 +0000 UTC" firstStartedPulling="2026-02-19 15:28:29.05535626 +0000 UTC m=+1138.537386384" lastFinishedPulling="2026-02-19 15:28:38.08591484 +0000 UTC m=+1147.567944974" observedRunningTime="2026-02-19 15:28:38.998076338 +0000 UTC m=+1148.480106462" watchObservedRunningTime="2026-02-19 15:28:39.006040143 +0000 UTC m=+1148.488070287" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.020549 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-k9zsz" podStartSLOduration=3.566756766 podStartE2EDuration="12.020533308s" podCreationTimestamp="2026-02-19 15:28:27 +0000 UTC" firstStartedPulling="2026-02-19 15:28:29.615662345 +0000 UTC m=+1139.097692469" lastFinishedPulling="2026-02-19 15:28:38.069438887 +0000 UTC m=+1147.551469011" observedRunningTime="2026-02-19 15:28:39.017533095 +0000 UTC m=+1148.499563219" watchObservedRunningTime="2026-02-19 15:28:39.020533308 +0000 UTC m=+1148.502563432" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518240 4810 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"] Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518630 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700fd144-e077-4468-80a4-f131fdb9d67e" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518651 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="700fd144-e077-4468-80a4-f131fdb9d67e" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518669 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="init" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518675 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="init" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518681 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6217aad-07e6-49b6-8e80-41e75cecaaf5" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518687 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6217aad-07e6-49b6-8e80-41e75cecaaf5" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518697 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b15da5-701a-492a-b986-99b767d2876c" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518702 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b15da5-701a-492a-b986-99b767d2876c" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518715 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="dnsmasq-dns" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518720 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="dnsmasq-dns" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518734 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770188c-7480-4529-8450-3d1a44cf50d6" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518740 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5770188c-7480-4529-8450-3d1a44cf50d6" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518746 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c7665c-330a-45b8-b461-bd08b069b747" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518752 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c7665c-330a-45b8-b461-bd08b069b747" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518763 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518769 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: E0219 15:28:39.518781 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31093793-65b6-467c-8d5b-218e108fd330" containerName="glance-db-sync" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518786 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="31093793-65b6-467c-8d5b-218e108fd330" containerName="glance-db-sync" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518930 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5770188c-7480-4529-8450-3d1a44cf50d6" containerName="mariadb-account-create-update" Feb 
19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518956 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c7665c-330a-45b8-b461-bd08b069b747" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518968 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6217aad-07e6-49b6-8e80-41e75cecaaf5" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518985 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="31093793-65b6-467c-8d5b-218e108fd330" containerName="glance-db-sync" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.518998 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="700fd144-e077-4468-80a4-f131fdb9d67e" containerName="mariadb-database-create" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.519016 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.519031 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="788aae13-b274-4965-ac0c-8ac075c32567" containerName="dnsmasq-dns" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.519040 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b15da5-701a-492a-b986-99b767d2876c" containerName="mariadb-account-create-update" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.519877 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.554549 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"] Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658544 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658637 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658668 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658684 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658714 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.658763 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84kv\" (UniqueName: \"kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760440 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760486 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760507 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760541 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760591 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84kv\" (UniqueName: \"kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.760618 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.761726 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.762209 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.762708 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.763192 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.763709 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.783300 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84kv\" (UniqueName: \"kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv\") pod \"dnsmasq-dns-589c97547-9nnhp\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:39 crc kubenswrapper[4810]: I0219 15:28:39.837644 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:40 crc kubenswrapper[4810]: I0219 15:28:40.368036 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"]
Feb 19 15:28:41 crc kubenswrapper[4810]: I0219 15:28:41.026382 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerStarted","Data":"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f"}
Feb 19 15:28:41 crc kubenswrapper[4810]: I0219 15:28:41.028047 4810 generic.go:334] "Generic (PLEG): container finished" podID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerID="e8fb43b2b423ed4972c4fdcf61131fbebdb2452721ad8c709c2892ac5c467178" exitCode=0
Feb 19 15:28:41 crc kubenswrapper[4810]: I0219 15:28:41.028151 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589c97547-9nnhp" event={"ID":"7cc2bd68-d5b3-4416-8752-edf7ee85bf88","Type":"ContainerDied","Data":"e8fb43b2b423ed4972c4fdcf61131fbebdb2452721ad8c709c2892ac5c467178"}
Feb 19 15:28:41 crc kubenswrapper[4810]: I0219 15:28:41.028368 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589c97547-9nnhp" event={"ID":"7cc2bd68-d5b3-4416-8752-edf7ee85bf88","Type":"ContainerStarted","Data":"6cdfc0e390a65c8dc0f4b9e7aa2f77b612821b90bf368ae80a8f0a6cf661f875"}
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.039431 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerStarted","Data":"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962"}
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.041541 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589c97547-9nnhp" event={"ID":"7cc2bd68-d5b3-4416-8752-edf7ee85bf88","Type":"ContainerStarted","Data":"b3c769533587d67c2cf2579dac0d7fe9ba6b6dac50efc5e3e91af785d9090a99"}
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.041816 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.065877 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.075768 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.075746813 podStartE2EDuration="21.075746813s" podCreationTimestamp="2026-02-19 15:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:42.065679237 +0000 UTC m=+1151.547709381" watchObservedRunningTime="2026-02-19 15:28:42.075746813 +0000 UTC m=+1151.557776947"
Feb 19 15:28:42 crc kubenswrapper[4810]: I0219 15:28:42.092304 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-589c97547-9nnhp" podStartSLOduration=3.092284178 podStartE2EDuration="3.092284178s" podCreationTimestamp="2026-02-19 15:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:42.083005131 +0000 UTC m=+1151.565035265" watchObservedRunningTime="2026-02-19 15:28:42.092284178 +0000 UTC m=+1151.574314312"
Feb 19 15:28:45 crc kubenswrapper[4810]: I0219 15:28:45.083540 4810 generic.go:334] "Generic (PLEG): container finished" podID="63eeb47c-9c4a-4e36-be24-61c126517600" containerID="7e83b0c5177b1183e58ad0498417fc1c3b6e142723e7482bda0235e4615b43f5" exitCode=0
Feb 19 15:28:45 crc kubenswrapper[4810]: I0219 15:28:45.083716 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sd4lr" event={"ID":"63eeb47c-9c4a-4e36-be24-61c126517600","Type":"ContainerDied","Data":"7e83b0c5177b1183e58ad0498417fc1c3b6e142723e7482bda0235e4615b43f5"}
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.099937 4810 generic.go:334] "Generic (PLEG): container finished" podID="082fc735-2850-452d-841a-0af9ed7ed171" containerID="22a00ed65eebcb7030f20de212b927e0556118314908589176cea5b5329504cb" exitCode=0
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.100096 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9zsz" event={"ID":"082fc735-2850-452d-841a-0af9ed7ed171","Type":"ContainerDied","Data":"22a00ed65eebcb7030f20de212b927e0556118314908589176cea5b5329504cb"}
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.540347 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sd4lr"
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.593729 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle\") pod \"63eeb47c-9c4a-4e36-be24-61c126517600\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") "
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.593805 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data\") pod \"63eeb47c-9c4a-4e36-be24-61c126517600\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") "
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.593900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data\") pod \"63eeb47c-9c4a-4e36-be24-61c126517600\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") "
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.593960 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnkbf\" (UniqueName: \"kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf\") pod \"63eeb47c-9c4a-4e36-be24-61c126517600\" (UID: \"63eeb47c-9c4a-4e36-be24-61c126517600\") "
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.599887 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf" (OuterVolumeSpecName: "kube-api-access-mnkbf") pod "63eeb47c-9c4a-4e36-be24-61c126517600" (UID: "63eeb47c-9c4a-4e36-be24-61c126517600"). InnerVolumeSpecName "kube-api-access-mnkbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.600698 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "63eeb47c-9c4a-4e36-be24-61c126517600" (UID: "63eeb47c-9c4a-4e36-be24-61c126517600"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.626874 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63eeb47c-9c4a-4e36-be24-61c126517600" (UID: "63eeb47c-9c4a-4e36-be24-61c126517600"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.658598 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data" (OuterVolumeSpecName: "config-data") pod "63eeb47c-9c4a-4e36-be24-61c126517600" (UID: "63eeb47c-9c4a-4e36-be24-61c126517600"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.696472 4810 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.696513 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnkbf\" (UniqueName: \"kubernetes.io/projected/63eeb47c-9c4a-4e36-be24-61c126517600-kube-api-access-mnkbf\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.696576 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:46 crc kubenswrapper[4810]: I0219 15:28:46.696590 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63eeb47c-9c4a-4e36-be24-61c126517600-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.115822 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sd4lr" event={"ID":"63eeb47c-9c4a-4e36-be24-61c126517600","Type":"ContainerDied","Data":"490bfe3163d0e855ea419f13cf8a5bebf9306234f76375e3967a08a8c33723b7"}
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.115911 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490bfe3163d0e855ea419f13cf8a5bebf9306234f76375e3967a08a8c33723b7"
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.115845 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sd4lr"
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.537354 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9zsz"
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.612574 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle\") pod \"082fc735-2850-452d-841a-0af9ed7ed171\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") "
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.612681 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb\") pod \"082fc735-2850-452d-841a-0af9ed7ed171\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") "
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.612706 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data\") pod \"082fc735-2850-452d-841a-0af9ed7ed171\" (UID: \"082fc735-2850-452d-841a-0af9ed7ed171\") "
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.616753 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb" (OuterVolumeSpecName: "kube-api-access-t4qlb") pod "082fc735-2850-452d-841a-0af9ed7ed171" (UID: "082fc735-2850-452d-841a-0af9ed7ed171"). InnerVolumeSpecName "kube-api-access-t4qlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.634175 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "082fc735-2850-452d-841a-0af9ed7ed171" (UID: "082fc735-2850-452d-841a-0af9ed7ed171"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.655990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data" (OuterVolumeSpecName: "config-data") pod "082fc735-2850-452d-841a-0af9ed7ed171" (UID: "082fc735-2850-452d-841a-0af9ed7ed171"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.717786 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.717812 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082fc735-2850-452d-841a-0af9ed7ed171-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:47 crc kubenswrapper[4810]: I0219 15:28:47.717826 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4qlb\" (UniqueName: \"kubernetes.io/projected/082fc735-2850-452d-841a-0af9ed7ed171-kube-api-access-t4qlb\") on node \"crc\" DevicePath \"\""
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.129843 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k9zsz" event={"ID":"082fc735-2850-452d-841a-0af9ed7ed171","Type":"ContainerDied","Data":"866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c"}
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.129899 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866dbef12abd89bdc5dcdad8828a3677ff57d6665594483a4a2f0d6d9a3ec62c"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.129961 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k9zsz"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.386630 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.387376 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-589c97547-9nnhp" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="dnsmasq-dns" containerID="cri-o://b3c769533587d67c2cf2579dac0d7fe9ba6b6dac50efc5e3e91af785d9090a99" gracePeriod=10
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.388483 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-589c97547-9nnhp"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.425517 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"]
Feb 19 15:28:48 crc kubenswrapper[4810]: E0219 15:28:48.425919 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63eeb47c-9c4a-4e36-be24-61c126517600" containerName="watcher-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.425936 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="63eeb47c-9c4a-4e36-be24-61c126517600" containerName="watcher-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: E0219 15:28:48.425962 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082fc735-2850-452d-841a-0af9ed7ed171" containerName="keystone-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.425969 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="082fc735-2850-452d-841a-0af9ed7ed171" containerName="keystone-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.426140 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="082fc735-2850-452d-841a-0af9ed7ed171" containerName="keystone-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.426165 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="63eeb47c-9c4a-4e36-be24-61c126517600" containerName="watcher-db-sync"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.427063 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.454137 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2f6g5"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.455311 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.457696 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.458251 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j78zz"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.462070 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.462101 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.462349 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.464970 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.527419 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2f6g5"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545056 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545095 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6bv\" (UniqueName: \"kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545145 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545193 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545208 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnqn\" (UniqueName: \"kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545244 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545258 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545298 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545315 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.545357 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.627210 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.628372 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.633674 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.633838 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ngb4n"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.640273 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.641663 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.644081 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646280 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646318 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646411 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646431 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646452 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646536 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646563 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646592 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6bv\" (UniqueName: \"kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646610 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646640 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646665 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.646684 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlnqn\" (UniqueName: \"kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.649954 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.650852 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.651536 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.654668 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.657495 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.664281 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.664352 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.665754 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.667406 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.671003 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.679593 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.682022 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.683155 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlnqn\" (UniqueName: \"kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.683919 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys\") pod \"keystone-bootstrap-2f6g5\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " pod="openstack/keystone-bootstrap-2f6g5"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.687104 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.694037 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf6bv\" (UniqueName: \"kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv\") pod \"dnsmasq-dns-744c9b5447-bdbq7\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " pod="openstack/dnsmasq-dns-744c9b5447-bdbq7"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.723200 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.741288 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-j989d"]
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.742588 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j989d"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749401 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749450 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749525 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749546 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bwsx\" (UniqueName: \"kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749587 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749614 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjt9b\" (UniqueName: \"kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749651 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749682 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749703 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749726 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749756 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749777 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.749798 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0"
Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.750065 4810 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.750811 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xcpvh" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.750946 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.751083 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.765865 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j989d"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.778557 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f6g5" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.811666 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.859461 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-svmgl"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.866365 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.873933 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.874206 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.874365 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nxm2z" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878291 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878348 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878471 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878592 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " 
pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878659 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878681 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.878907 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879218 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879235 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879297 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bwsx\" (UniqueName: \"kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879352 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879399 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjt9b\" (UniqueName: \"kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879548 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllpm\" (UniqueName: \"kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.879941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.880043 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.880091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.880131 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.883391 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.890594 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-svmgl"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.890969 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.891371 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.904280 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.905590 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.914610 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.914737 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.916501 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.917158 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.917265 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.917439 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-f4mj6" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.918397 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.921435 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.953604 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.963080 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.963970 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:48 crc kubenswrapper[4810]: I0219 15:28:48.965030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.007915 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.013778 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.013866 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqnf7\" (UniqueName: \"kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014032 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbv5\" (UniqueName: \"kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014084 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014254 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014290 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.014319 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.027434 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr\") pod \"watcher-api-0\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " pod="openstack/watcher-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033207 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033300 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033360 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033433 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllpm\" (UniqueName: \"kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033461 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.033504 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.044164 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bwsx\" (UniqueName: \"kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx\") pod \"watcher-decision-engine-0\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.060306 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjt9b\" (UniqueName: \"kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b\") pod \"watcher-applier-0\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " pod="openstack/watcher-applier-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.074398 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllpm\" (UniqueName: \"kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.077574 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.093966 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.120922 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config\") pod \"neutron-db-sync-j989d\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.123746 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.124309 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.127371 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.128804 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.133219 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.133391 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.134050 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lrdct" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.134190 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.135937 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136001 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136042 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136069 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqnf7\" (UniqueName: \"kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136107 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbv5\" (UniqueName: \"kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136140 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136173 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136193 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136244 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.136265 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.137638 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.137684 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j989d" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.138416 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.138248 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.143910 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.148916 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.153937 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.154436 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.159252 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.180269 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.188819 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqnf7\" (UniqueName: \"kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.202109 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbv5\" (UniqueName: \"kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5\") pod \"horizon-849c785789-5xrh2\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.202234 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data\") pod \"cinder-db-sync-svmgl\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.214433 4810 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.236197 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237193 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237253 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237271 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237364 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237378 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8d7\" (UniqueName: \"kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237421 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.237511 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.248204 4810 generic.go:334] "Generic (PLEG): container finished" podID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerID="b3c769533587d67c2cf2579dac0d7fe9ba6b6dac50efc5e3e91af785d9090a99" exitCode=0 Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.248249 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589c97547-9nnhp" 
event={"ID":"7cc2bd68-d5b3-4416-8752-edf7ee85bf88","Type":"ContainerDied","Data":"b3c769533587d67c2cf2579dac0d7fe9ba6b6dac50efc5e3e91af785d9090a99"} Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.252563 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.253984 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.260261 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7jdcp"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.263972 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.267808 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-qpnp7" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.268023 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.268213 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.268372 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7jdcp"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.275809 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.319727 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hmc6k"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.321800 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.323496 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.323669 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mx2zj" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.327488 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hmc6k"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338198 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338504 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338535 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338555 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338570 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8f8d7\" (UniqueName: \"kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338589 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338827 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmxg\" (UniqueName: \"kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338851 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338871 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338922 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qksxk\" (UniqueName: \"kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.338943 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.339094 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.341012 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342373 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nljdr\" (UniqueName: \"kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342429 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342449 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342481 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342499 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data\") pod \"barbican-db-sync-hmc6k\" (UID: 
\"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342557 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342577 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342604 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342639 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342661 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " 
pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342708 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.342914 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.343197 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.343506 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.343734 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc 
kubenswrapper[4810]: I0219 15:28:49.348927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.355800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.362130 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.369618 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.372407 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.375113 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.376676 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.379381 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.380119 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.380318 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8d7\" (UniqueName: \"kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7\") pod \"glance-default-external-api-0\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.396725 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.411374 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.423442 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: E0219 15:28:49.423826 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="dnsmasq-dns" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.423838 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="dnsmasq-dns" Feb 19 15:28:49 crc kubenswrapper[4810]: E0219 15:28:49.423873 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="init" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.423880 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="init" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.424028 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" containerName="dnsmasq-dns" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.425904 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.428603 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.431210 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.432976 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445112 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445163 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445181 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445223 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data\") pod \"placement-db-sync-7jdcp\" (UID: 
\"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445251 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445306 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445354 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445381 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445413 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmxg\" (UniqueName: \"kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " 
pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445447 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qksxk\" (UniqueName: \"kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445513 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445545 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.445569 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nljdr\" (UniqueName: \"kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 
15:28:49.446582 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.447035 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.447171 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.447668 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.448150 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.449592 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.453038 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.456617 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.467644 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.474719 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.500175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " 
pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.510225 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmxg\" (UniqueName: \"kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg\") pod \"barbican-db-sync-hmc6k\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.511694 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-svmgl" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.529222 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.538784 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.538846 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552477 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qksxk\" (UniqueName: \"kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk\") pod \"placement-db-sync-7jdcp\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552543 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552634 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g84kv\" (UniqueName: \"kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552674 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552718 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552767 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" (UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.552789 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb\") pod \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\" 
(UID: \"7cc2bd68-d5b3-4416-8752-edf7ee85bf88\") " Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553493 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553560 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553611 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2b86\" (UniqueName: \"kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: 
I0219 15:28:49.553728 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553747 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553770 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553797 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553819 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 
15:28:49.553838 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553867 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553889 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553910 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd77w\" (UniqueName: \"kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553950 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.553995 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.554042 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.554089 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.554130 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ql7\" (UniqueName: \"kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.554172 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.571089 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nljdr\" (UniqueName: \"kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr\") pod \"dnsmasq-dns-5f4cfd6f6c-s7m64\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.610610 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv" (OuterVolumeSpecName: "kube-api-access-g84kv") pod "7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "kube-api-access-g84kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.611383 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.616928 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7jdcp" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656051 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656099 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656118 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656185 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656203 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd77w\" (UniqueName: \"kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656255 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656305 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656392 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656411 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656429 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ql7\" 
(UniqueName: \"kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656442 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656462 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656483 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656502 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656539 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 
crc kubenswrapper[4810]: I0219 15:28:49.656557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2b86\" (UniqueName: \"kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656587 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656602 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656622 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.656672 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g84kv\" (UniqueName: \"kubernetes.io/projected/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-kube-api-access-g84kv\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.658115 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.659172 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.662411 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.665537 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.666998 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.670507 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.674050 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.676622 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.676737 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.677735 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.716305 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.716800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.718642 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.719025 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.719712 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.721869 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2b86\" (UniqueName: \"kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.721905 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.723147 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.737833 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.740458 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ql7\" (UniqueName: \"kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7\") pod \"horizon-5fbd85f69f-5jhnw\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.740698 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd77w\" (UniqueName: \"kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w\") pod \"ceilometer-0\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " pod="openstack/ceilometer-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.781521 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.880732 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod 
"7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.951597 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.962389 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.962413 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.973919 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config" (OuterVolumeSpecName: "config") pod "7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.974681 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.975290 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7cc2bd68-d5b3-4416-8752-edf7ee85bf88" (UID: "7cc2bd68-d5b3-4416-8752-edf7ee85bf88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:49 crc kubenswrapper[4810]: I0219 15:28:49.994311 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.027288 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.064123 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.064150 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.064161 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cc2bd68-d5b3-4416-8752-edf7ee85bf88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.277533 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2f6g5"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.277887 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"] Feb 19 
15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.286786 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.292355 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.313016 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" event={"ID":"32e59d75-7087-41ab-8571-5e8830baeec0","Type":"ContainerStarted","Data":"f5ecac10f7282bb7c7be67fe8fb7362bec2d603a414fe0b250f0f959893cb30e"} Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.314591 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f6g5" event={"ID":"5a2a904f-47f9-40da-bc5f-aba73c4c1c57","Type":"ContainerStarted","Data":"e83e19f0a88b6d8caed6cea2c56d5be687bd87f6e4bb330f1df96f263a9c3db4"} Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.326848 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589c97547-9nnhp" event={"ID":"7cc2bd68-d5b3-4416-8752-edf7ee85bf88","Type":"ContainerDied","Data":"6cdfc0e390a65c8dc0f4b9e7aa2f77b612821b90bf368ae80a8f0a6cf661f875"} Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.326899 4810 scope.go:117] "RemoveContainer" containerID="b3c769533587d67c2cf2579dac0d7fe9ba6b6dac50efc5e3e91af785d9090a99" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.326922 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589c97547-9nnhp" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.375846 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.386692 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589c97547-9nnhp"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.422593 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.451486 4810 scope.go:117] "RemoveContainer" containerID="e8fb43b2b423ed4972c4fdcf61131fbebdb2452721ad8c709c2892ac5c467178" Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.896746 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j989d"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.957633 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.975905 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hmc6k"] Feb 19 15:28:50 crc kubenswrapper[4810]: I0219 15:28:50.999101 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.010682 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.019704 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-svmgl"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.031968 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7jdcp"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.039698 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.280147 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.297385 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.320077 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.345735 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.351259 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerStarted","Data":"123c419edcd4407399df0f1f1452e33c4f00d373cd040db3a818757999826cf9"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.359959 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.361880 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hmc6k" event={"ID":"2024a783-c3f9-4e57-b00f-52bec164e64e","Type":"ContainerStarted","Data":"398a9dcb2e74ac56cd2827ea038790abeb16b5f6b3573a48b306842a166c3f44"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.361979 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.381383 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerStarted","Data":"3750609201e07e960ce122b5fe6baad963df212daffe611a1c8ba29e4bf01f7a"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.396009 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" event={"ID":"18ca6546-69fd-492d-81c5-bb18c56b045d","Type":"ContainerStarted","Data":"3e6a529b000841e709c2e1d05c4d119a28c8f20c2ece39574181d8df78a6c626"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.409447 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.482154 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc2bd68-d5b3-4416-8752-edf7ee85bf88" path="/var/lib/kubelet/pods/7cc2bd68-d5b3-4416-8752-edf7ee85bf88/volumes" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.482812 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f6g5" event={"ID":"5a2a904f-47f9-40da-bc5f-aba73c4c1c57","Type":"ContainerStarted","Data":"e36236dacc44e9719f0a5616b325da89fd715c826d97ed6b2c660301840187d2"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.482842 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.482859 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4be760b2-263c-4b89-8bdf-ecf98114a24f","Type":"ContainerStarted","Data":"69118f89362c4dfde4f05ac6e0fde30afdc3fb74ce3c8ffeaee8a6df7e8789a5"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.494192 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/placement-db-sync-7jdcp" event={"ID":"36fe6fdb-2970-4773-8184-a2d16b8ca89a","Type":"ContainerStarted","Data":"63add7d4ad95ec66116fce5bf4ebf1368a0ffff3203d7f02a7456da5980c8ad7"} Feb 19 15:28:51 crc kubenswrapper[4810]: W0219 15:28:51.502002 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-89e242fd2781af0141617d648aab74ae173b61b87fc7bb8cdef6002ffaed43fc WatchSource:0}: Error finding container 89e242fd2781af0141617d648aab74ae173b61b87fc7bb8cdef6002ffaed43fc: Status 404 returned error can't find the container with id 89e242fd2781af0141617d648aab74ae173b61b87fc7bb8cdef6002ffaed43fc Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.512551 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.514810 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.514940 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5qb\" (UniqueName: \"kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.514967 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: 
\"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.514981 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.514997 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.524131 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.531559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerStarted","Data":"f7e61fd52ad6569907f8a84cbc32aef547486ffda35028466938ada8d5e3aa10"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.556573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerStarted","Data":"c6f5d9b5a6b15c45dc5d760616281a4656e6d239a89c530245a55061c13bc709"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.556624 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerStarted","Data":"5e6853b5e9878d2a03e52d891d8a223a4096ce551b9935e29c1d5c5f37ac41cb"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.556639 
4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerStarted","Data":"1347fa88cd5df9f6d1e59a77fbeedca11e777e87af7e22e44f3db90ca1dd624c"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.557693 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.577069 4810 generic.go:334] "Generic (PLEG): container finished" podID="32e59d75-7087-41ab-8571-5e8830baeec0" containerID="d88c68d698d3972f0825f45fc0f2b6d882a24f69749acf74ad5f3d90f016e7f9" exitCode=0 Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.577184 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" event={"ID":"32e59d75-7087-41ab-8571-5e8830baeec0","Type":"ContainerDied","Data":"d88c68d698d3972f0825f45fc0f2b6d882a24f69749acf74ad5f3d90f016e7f9"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.585522 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.595674 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-svmgl" event={"ID":"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5","Type":"ContainerStarted","Data":"ce811eb96390a8f8365be6dee0926b85f5365cb868957ed22165ebbf7d343712"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.601598 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j989d" event={"ID":"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1","Type":"ContainerStarted","Data":"af9963a31801a20460aedc2d93f7da81f8b9a7c2e7ec298ae21e257191169331"} Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.618233 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5qb\" (UniqueName: \"kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.618295 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.618316 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.618357 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.618512 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.620672 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.622104 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.624387 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.629934 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.645116 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.652952 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5qb\" (UniqueName: \"kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb\") pod \"horizon-668f7d7fb5-l5kpq\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.722185 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:28:51 crc kubenswrapper[4810]: I0219 15:28:51.925244 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2f6g5" podStartSLOduration=3.925229479 podStartE2EDuration="3.925229479s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:51.924648905 +0000 UTC m=+1161.406679029" watchObservedRunningTime="2026-02-19 15:28:51.925229479 +0000 UTC m=+1161.407259603" Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.043292 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.043270839 podStartE2EDuration="4.043270839s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:51.994006053 +0000 UTC m=+1161.476036177" watchObservedRunningTime="2026-02-19 15:28:52.043270839 +0000 UTC m=+1161.525300963" Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.067471 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.080641 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.520050 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.618644 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j989d" 
event={"ID":"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1","Type":"ContainerStarted","Data":"da258067ce2c7912909dc7c937b6ad45df02bf5c8504937ad6d6f0ea0359724a"} Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.625132 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerStarted","Data":"84ac3613a017c9c6f49ab1cf6c99c11f9ee32460de5f526447f4c9de546b4b3b"} Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.629551 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbd85f69f-5jhnw" event={"ID":"978c1383-cc82-4788-beea-b1e15b25eb1f","Type":"ContainerStarted","Data":"a02086ae9fd1e62e23bb222b4fb6d0999b14d605294e6aa8f70c1665e61a1617"} Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.636436 4810 generic.go:334] "Generic (PLEG): container finished" podID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerID="7bc85bb988b4afd58694705b3dae68dca31213574b00f92c71f4c77c5edfdf98" exitCode=0 Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.636519 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" event={"ID":"18ca6546-69fd-492d-81c5-bb18c56b045d","Type":"ContainerDied","Data":"7bc85bb988b4afd58694705b3dae68dca31213574b00f92c71f4c77c5edfdf98"} Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.642122 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerStarted","Data":"89e242fd2781af0141617d648aab74ae173b61b87fc7bb8cdef6002ffaed43fc"} Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.642706 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api-log" containerID="cri-o://5e6853b5e9878d2a03e52d891d8a223a4096ce551b9935e29c1d5c5f37ac41cb" gracePeriod=30 Feb 
19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.642831 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" containerID="cri-o://c6f5d9b5a6b15c45dc5d760616281a4656e6d239a89c530245a55061c13bc709" gracePeriod=30 Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.648995 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 15:28:52 crc kubenswrapper[4810]: I0219 15:28:52.676019 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-j989d" podStartSLOduration=4.675999348 podStartE2EDuration="4.675999348s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:28:52.635925017 +0000 UTC m=+1162.117955141" watchObservedRunningTime="2026-02-19 15:28:52.675999348 +0000 UTC m=+1162.158029472" Feb 19 15:28:53 crc kubenswrapper[4810]: I0219 15:28:53.654450 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerStarted","Data":"039aba0ed63f1dd6f66e34144e1de83684475b75135af6fd57bb70ce97ffc6b0"} Feb 19 15:28:53 crc kubenswrapper[4810]: I0219 15:28:53.656869 4810 generic.go:334] "Generic (PLEG): container finished" podID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerID="5e6853b5e9878d2a03e52d891d8a223a4096ce551b9935e29c1d5c5f37ac41cb" exitCode=143 Feb 19 15:28:53 crc kubenswrapper[4810]: I0219 15:28:53.656950 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerDied","Data":"5e6853b5e9878d2a03e52d891d8a223a4096ce551b9935e29c1d5c5f37ac41cb"} Feb 19 15:28:54 crc kubenswrapper[4810]: 
I0219 15:28:54.124593 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:28:54 crc kubenswrapper[4810]: W0219 15:28:54.613876 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e WatchSource:0}: Error finding container b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e: Status 404 returned error can't find the container with id b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.674282 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerStarted","Data":"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93"} Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.676436 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" event={"ID":"32e59d75-7087-41ab-8571-5e8830baeec0","Type":"ContainerDied","Data":"f5ecac10f7282bb7c7be67fe8fb7362bec2d603a414fe0b250f0f959893cb30e"} Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.676458 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5ecac10f7282bb7c7be67fe8fb7362bec2d603a414fe0b250f0f959893cb30e" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.679586 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerStarted","Data":"b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e"} Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.727693 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.801562 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.801833 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.801859 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.801894 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.801922 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.802047 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf6bv\" 
(UniqueName: \"kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv\") pod \"32e59d75-7087-41ab-8571-5e8830baeec0\" (UID: \"32e59d75-7087-41ab-8571-5e8830baeec0\") " Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.820278 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv" (OuterVolumeSpecName: "kube-api-access-cf6bv") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "kube-api-access-cf6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.828410 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.835737 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.855711 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.859379 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.865992 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config" (OuterVolumeSpecName: "config") pod "32e59d75-7087-41ab-8571-5e8830baeec0" (UID: "32e59d75-7087-41ab-8571-5e8830baeec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908699 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908738 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf6bv\" (UniqueName: \"kubernetes.io/projected/32e59d75-7087-41ab-8571-5e8830baeec0-kube-api-access-cf6bv\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908749 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908758 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-config\") on node \"crc\" DevicePath 
\"\"" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908766 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:54 crc kubenswrapper[4810]: I0219 15:28:54.908774 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e59d75-7087-41ab-8571-5e8830baeec0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:28:55 crc kubenswrapper[4810]: I0219 15:28:55.454192 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:28:55 crc kubenswrapper[4810]: I0219 15:28:55.702300 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744c9b5447-bdbq7" Feb 19 15:28:55 crc kubenswrapper[4810]: I0219 15:28:55.750894 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"] Feb 19 15:28:55 crc kubenswrapper[4810]: I0219 15:28:55.764351 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744c9b5447-bdbq7"] Feb 19 15:28:56 crc kubenswrapper[4810]: I0219 15:28:56.432563 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:45822->10.217.0.152:9322: read: connection reset by peer" Feb 19 15:28:56 crc kubenswrapper[4810]: I0219 15:28:56.716978 4810 generic.go:334] "Generic (PLEG): container finished" podID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerID="c6f5d9b5a6b15c45dc5d760616281a4656e6d239a89c530245a55061c13bc709" exitCode=0 Feb 19 15:28:56 crc kubenswrapper[4810]: I0219 15:28:56.717036 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerDied","Data":"c6f5d9b5a6b15c45dc5d760616281a4656e6d239a89c530245a55061c13bc709"} Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.456775 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e59d75-7087-41ab-8571-5e8830baeec0" path="/var/lib/kubelet/pods/32e59d75-7087-41ab-8571-5e8830baeec0/volumes" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.746301 4810 generic.go:334] "Generic (PLEG): container finished" podID="5a2a904f-47f9-40da-bc5f-aba73c4c1c57" containerID="e36236dacc44e9719f0a5616b325da89fd715c826d97ed6b2c660301840187d2" exitCode=0 Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.746353 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f6g5" event={"ID":"5a2a904f-47f9-40da-bc5f-aba73c4c1c57","Type":"ContainerDied","Data":"e36236dacc44e9719f0a5616b325da89fd715c826d97ed6b2c660301840187d2"} Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.771118 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.815943 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:28:57 crc kubenswrapper[4810]: E0219 15:28:57.816531 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e59d75-7087-41ab-8571-5e8830baeec0" containerName="init" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.816551 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e59d75-7087-41ab-8571-5e8830baeec0" containerName="init" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.816816 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e59d75-7087-41ab-8571-5e8830baeec0" containerName="init" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.818036 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.822792 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.825395 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877239 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877294 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877319 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877458 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc 
kubenswrapper[4810]: I0219 15:28:57.877503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877543 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.877634 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjb4j\" (UniqueName: \"kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.898715 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.930191 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f56498b8d-9gwmf"] Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.932958 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.953945 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f56498b8d-9gwmf"] Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982359 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-secret-key\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982422 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjb4j\" (UniqueName: \"kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982509 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982526 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnzp\" (UniqueName: \"kubernetes.io/projected/737d6629-747f-4d16-a545-d0070c20fe5d-kube-api-access-5mnzp\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982585 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737d6629-747f-4d16-a545-d0070c20fe5d-logs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982654 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-tls-certs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982687 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-combined-ca-bundle\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982707 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-config-data\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982756 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-scripts\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982782 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.982835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.987763 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs\") pod \"horizon-869f57798-ngdtl\" (UID: 
\"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.988438 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.988800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:57 crc kubenswrapper[4810]: I0219 15:28:57.995922 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.009482 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.010018 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.024892 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjb4j\" (UniqueName: \"kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j\") pod \"horizon-869f57798-ngdtl\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084247 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-secret-key\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084499 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnzp\" (UniqueName: \"kubernetes.io/projected/737d6629-747f-4d16-a545-d0070c20fe5d-kube-api-access-5mnzp\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084536 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737d6629-747f-4d16-a545-d0070c20fe5d-logs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084566 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-tls-certs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084602 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-combined-ca-bundle\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084629 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-config-data\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.084677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-scripts\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.085827 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-scripts\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.091741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-tls-certs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.092455 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-horizon-secret-key\") pod 
\"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.092781 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/737d6629-747f-4d16-a545-d0070c20fe5d-logs\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.093016 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/737d6629-747f-4d16-a545-d0070c20fe5d-config-data\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.100048 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737d6629-747f-4d16-a545-d0070c20fe5d-combined-ca-bundle\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.105082 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnzp\" (UniqueName: \"kubernetes.io/projected/737d6629-747f-4d16-a545-d0070c20fe5d-kube-api-access-5mnzp\") pod \"horizon-5f56498b8d-9gwmf\" (UID: \"737d6629-747f-4d16-a545-d0070c20fe5d\") " pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.161503 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:28:58 crc kubenswrapper[4810]: I0219 15:28:58.271981 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:28:59 crc kubenswrapper[4810]: I0219 15:28:59.125374 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 19 15:29:04 crc kubenswrapper[4810]: I0219 15:29:04.127580 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 19 15:29:04 crc kubenswrapper[4810]: I0219 15:29:04.128515 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:29:07 crc kubenswrapper[4810]: E0219 15:29:07.475512 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 19 15:29:07 crc kubenswrapper[4810]: E0219 15:29:07.475889 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 19 15:29:07 crc kubenswrapper[4810]: E0219 15:29:07.476050 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.159:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc4h668h666h568hb4h5ddh659h67fhb5h687h599h678hd7h5cbh586h6bh74h546h68dhch97h574h55bh79h59fh96h9fh68bh656hf4h64h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd77w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b8cd44d0-7395-44e1-9112-9e8bb4198b93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.125934 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.564102 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2f6g5" Feb 19 15:29:09 crc kubenswrapper[4810]: E0219 15:29:09.571311 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 15:29:09 crc kubenswrapper[4810]: E0219 15:29:09.571371 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 15:29:09 crc kubenswrapper[4810]: E0219 15:29:09.571487 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.159:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh59h668h67h5b5h5b9h575h578h548h644h56bhc4h557h674h98h74h7chc5h66h68h566h9dh5cch68fh577h8fh695h544h64ch5fh9h9fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4ql7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fbd85f69f-5jhnw_openstack(978c1383-cc82-4788-beea-b1e15b25eb1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:29:09 crc kubenswrapper[4810]: E0219 15:29:09.576618 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5fbd85f69f-5jhnw" podUID="978c1383-cc82-4788-beea-b1e15b25eb1f" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719170 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts\") pod \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719215 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle\") pod 
\"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719239 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys\") pod \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719349 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys\") pod \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719499 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data\") pod \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.719524 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlnqn\" (UniqueName: \"kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn\") pod \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\" (UID: \"5a2a904f-47f9-40da-bc5f-aba73c4c1c57\") " Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.728075 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn" (OuterVolumeSpecName: "kube-api-access-qlnqn") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "kube-api-access-qlnqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.728455 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.728593 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.752582 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts" (OuterVolumeSpecName: "scripts") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.758662 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.761452 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data" (OuterVolumeSpecName: "config-data") pod "5a2a904f-47f9-40da-bc5f-aba73c4c1c57" (UID: "5a2a904f-47f9-40da-bc5f-aba73c4c1c57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822118 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822143 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822154 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnqn\" (UniqueName: \"kubernetes.io/projected/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-kube-api-access-qlnqn\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822164 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822171 4810 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.822179 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a2a904f-47f9-40da-bc5f-aba73c4c1c57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.915123 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f6g5" Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.915339 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f6g5" event={"ID":"5a2a904f-47f9-40da-bc5f-aba73c4c1c57","Type":"ContainerDied","Data":"e83e19f0a88b6d8caed6cea2c56d5be687bd87f6e4bb330f1df96f263a9c3db4"} Feb 19 15:29:09 crc kubenswrapper[4810]: I0219 15:29:09.915367 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83e19f0a88b6d8caed6cea2c56d5be687bd87f6e4bb330f1df96f263a9c3db4" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.647221 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2f6g5"] Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.654972 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2f6g5"] Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.747422 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-thnc7"] Feb 19 15:29:10 crc kubenswrapper[4810]: E0219 15:29:10.747769 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2a904f-47f9-40da-bc5f-aba73c4c1c57" containerName="keystone-bootstrap" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.747786 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2a904f-47f9-40da-bc5f-aba73c4c1c57" containerName="keystone-bootstrap" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.747996 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2a904f-47f9-40da-bc5f-aba73c4c1c57" containerName="keystone-bootstrap" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.748712 4810 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.751581 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.751599 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.751659 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.751686 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.751773 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j78zz" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.757764 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-thnc7"] Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.948241 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.948628 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.948721 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.948746 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxtzf\" (UniqueName: \"kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.949018 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:10 crc kubenswrapper[4810]: I0219 15:29:10.949164 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050549 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050591 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxtzf\" 
(UniqueName: \"kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050662 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050707 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050733 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.050758 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.058053 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys\") pod 
\"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.058136 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.059523 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.059980 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.068069 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.070758 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxtzf\" (UniqueName: \"kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf\") pod \"keystone-bootstrap-thnc7\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc 
kubenswrapper[4810]: I0219 15:29:11.096909 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:11 crc kubenswrapper[4810]: I0219 15:29:11.456517 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2a904f-47f9-40da-bc5f-aba73c4c1c57" path="/var/lib/kubelet/pods/5a2a904f-47f9-40da-bc5f-aba73c4c1c57/volumes" Feb 19 15:29:18 crc kubenswrapper[4810]: E0219 15:29:18.933578 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 19 15:29:18 crc kubenswrapper[4810]: E0219 15:29:18.934144 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 19 15:29:18 crc kubenswrapper[4810]: E0219 15:29:18.934248 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btmxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hmc6k_openstack(2024a783-c3f9-4e57-b00f-52bec164e64e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:29:18 crc kubenswrapper[4810]: E0219 15:29:18.935397 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hmc6k" 
podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.010790 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc","Type":"ContainerDied","Data":"1347fa88cd5df9f6d1e59a77fbeedca11e777e87af7e22e44f3db90ca1dd624c"} Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.010849 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1347fa88cd5df9f6d1e59a77fbeedca11e777e87af7e22e44f3db90ca1dd624c" Feb 19 15:29:19 crc kubenswrapper[4810]: E0219 15:29:19.018590 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-hmc6k" podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.076587 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.086523 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.125686 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.217856 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle\") pod \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.217922 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data\") pod \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.217973 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts\") pod \"978c1383-cc82-4788-beea-b1e15b25eb1f\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.218001 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca\") pod \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.218031 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs\") pod \"978c1383-cc82-4788-beea-b1e15b25eb1f\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.218057 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key\") pod \"978c1383-cc82-4788-beea-b1e15b25eb1f\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.219200 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr\") pod \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.218527 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs" (OuterVolumeSpecName: "logs") pod "978c1383-cc82-4788-beea-b1e15b25eb1f" (UID: "978c1383-cc82-4788-beea-b1e15b25eb1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.219142 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts" (OuterVolumeSpecName: "scripts") pod "978c1383-cc82-4788-beea-b1e15b25eb1f" (UID: "978c1383-cc82-4788-beea-b1e15b25eb1f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.219220 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data\") pod \"978c1383-cc82-4788-beea-b1e15b25eb1f\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.219613 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ql7\" (UniqueName: \"kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7\") pod \"978c1383-cc82-4788-beea-b1e15b25eb1f\" (UID: \"978c1383-cc82-4788-beea-b1e15b25eb1f\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.219651 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs\") pod \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\" (UID: \"2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc\") " Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.220434 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs" (OuterVolumeSpecName: "logs") pod "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" (UID: "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.220690 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.220703 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.220712 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/978c1383-cc82-4788-beea-b1e15b25eb1f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.221132 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data" (OuterVolumeSpecName: "config-data") pod "978c1383-cc82-4788-beea-b1e15b25eb1f" (UID: "978c1383-cc82-4788-beea-b1e15b25eb1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.221359 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "978c1383-cc82-4788-beea-b1e15b25eb1f" (UID: "978c1383-cc82-4788-beea-b1e15b25eb1f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.222532 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr" (OuterVolumeSpecName: "kube-api-access-mfnzr") pod "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" (UID: "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc"). InnerVolumeSpecName "kube-api-access-mfnzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.225484 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7" (OuterVolumeSpecName: "kube-api-access-h4ql7") pod "978c1383-cc82-4788-beea-b1e15b25eb1f" (UID: "978c1383-cc82-4788-beea-b1e15b25eb1f"). InnerVolumeSpecName "kube-api-access-h4ql7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.243712 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" (UID: "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.248164 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" (UID: "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.266873 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data" (OuterVolumeSpecName: "config-data") pod "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" (UID: "2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325437 4810 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/978c1383-cc82-4788-beea-b1e15b25eb1f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325466 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/978c1383-cc82-4788-beea-b1e15b25eb1f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325476 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfnzr\" (UniqueName: \"kubernetes.io/projected/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-kube-api-access-mfnzr\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325486 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ql7\" (UniqueName: \"kubernetes.io/projected/978c1383-cc82-4788-beea-b1e15b25eb1f-kube-api-access-h4ql7\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325494 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325502 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.325509 4810 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.538176 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.538239 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.538278 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.539029 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:29:19 crc kubenswrapper[4810]: I0219 15:29:19.539083 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf" gracePeriod=600 Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.033372 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fbd85f69f-5jhnw" event={"ID":"978c1383-cc82-4788-beea-b1e15b25eb1f","Type":"ContainerDied","Data":"a02086ae9fd1e62e23bb222b4fb6d0999b14d605294e6aa8f70c1665e61a1617"} Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.033460 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fbd85f69f-5jhnw" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.037452 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf" exitCode=0 Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.037521 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf"} Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.037551 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.037611 4810 scope.go:117] "RemoveContainer" containerID="6c88e0127771a4aa28c6261d9a83da29a3f930023146271a9d942e738f8152ff" Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.058903 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.058951 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.059070 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqnf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-svmgl_openstack(848dfe9d-05f4-4ba9-919e-23e9a7ae63d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.062291 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-svmgl" podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.072233 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.082199 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.129995 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.130414 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api-log" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.130429 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api-log" Feb 19 15:29:20 crc kubenswrapper[4810]: E0219 15:29:20.130453 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.130459 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" Feb 19 15:29:20 crc 
kubenswrapper[4810]: I0219 15:29:20.130636 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api-log" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.130657 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" containerName="watcher-api" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.131670 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.134606 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.137859 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.151248 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fbd85f69f-5jhnw"] Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.161779 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.239048 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.239104 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdn2x\" (UniqueName: \"kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 
15:29:20.239304 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.239425 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.239573 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.341146 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.341274 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.341349 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdn2x\" (UniqueName: 
\"kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.341418 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.341450 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.342301 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.348629 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.348712 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc 
kubenswrapper[4810]: I0219 15:29:20.350498 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.360617 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdn2x\" (UniqueName: \"kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x\") pod \"watcher-api-0\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.449083 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:20 crc kubenswrapper[4810]: I0219 15:29:20.926174 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f56498b8d-9gwmf"] Feb 19 15:29:20 crc kubenswrapper[4810]: W0219 15:29:20.957679 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737d6629_747f_4d16_a545_d0070c20fe5d.slice/crio-da90b84310b3d5abeea30fa57f1618f55f0eafa5818785fd1f92d533a8fff76b WatchSource:0}: Error finding container da90b84310b3d5abeea30fa57f1618f55f0eafa5818785fd1f92d533a8fff76b: Status 404 returned error can't find the container with id da90b84310b3d5abeea30fa57f1618f55f0eafa5818785fd1f92d533a8fff76b Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.055864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" event={"ID":"18ca6546-69fd-492d-81c5-bb18c56b045d","Type":"ContainerStarted","Data":"acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b"} Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.062814 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.063899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f56498b8d-9gwmf" event={"ID":"737d6629-747f-4d16-a545-d0070c20fe5d","Type":"ContainerStarted","Data":"da90b84310b3d5abeea30fa57f1618f55f0eafa5818785fd1f92d533a8fff76b"} Feb 19 15:29:21 crc kubenswrapper[4810]: E0219 15:29:21.065311 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-svmgl" podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.106939 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" podStartSLOduration=33.10691847 podStartE2EDuration="33.10691847s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:21.077708313 +0000 UTC m=+1190.559738437" watchObservedRunningTime="2026-02-19 15:29:21.10691847 +0000 UTC m=+1190.588948594" Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.126666 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.292676 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-thnc7"] Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.401723 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.456151 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc" 
path="/var/lib/kubelet/pods/2760e5f1-9d54-4ae0-a36e-344cd4f2e2cc/volumes" Feb 19 15:29:21 crc kubenswrapper[4810]: I0219 15:29:21.457437 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978c1383-cc82-4788-beea-b1e15b25eb1f" path="/var/lib/kubelet/pods/978c1383-cc82-4788-beea-b1e15b25eb1f/volumes" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.077884 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerStarted","Data":"5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.078299 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerStarted","Data":"b7473c6c07a1c77d67ffe62af3e5c262ab61dca816caf8aab0acb14dc5b23ebd"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.078309 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerStarted","Data":"dd8cab14dea5221ed6bc57de9b6e6053cd08e7d2f18677d44feb73bc0f3396df"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.083534 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4be760b2-263c-4b89-8bdf-ecf98114a24f","Type":"ContainerStarted","Data":"8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.091127 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerStarted","Data":"aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.091171 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerStarted","Data":"4368969b33587354836e3cdb6c31f7bd645f510dc4e3c262614ef8da31c0eb92"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.091181 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerStarted","Data":"3ca7fe1f4f8bad9a3a06d89d4141ff28e32b10f6d3445d4b3f2404b6e71c942f"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.092189 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.093977 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": dial tcp 10.217.0.168:9322: connect: connection refused" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.098378 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7jdcp" event={"ID":"36fe6fdb-2970-4773-8184-a2d16b8ca89a","Type":"ContainerStarted","Data":"829a51aca23df8d8763078bcae4b4cba43b6c265996ab11fc55f6d42ce950516"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.100568 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-869f57798-ngdtl" podStartSLOduration=25.100550056 podStartE2EDuration="25.100550056s" podCreationTimestamp="2026-02-19 15:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.095615815 +0000 UTC m=+1191.577645929" watchObservedRunningTime="2026-02-19 15:29:22.100550056 +0000 UTC m=+1191.582580170" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.105553 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5f56498b8d-9gwmf" event={"ID":"737d6629-747f-4d16-a545-d0070c20fe5d","Type":"ContainerStarted","Data":"93ddf2af467af361323be362f93ceed997e447e87fc63962e86036a1907feb9e"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.105605 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f56498b8d-9gwmf" event={"ID":"737d6629-747f-4d16-a545-d0070c20fe5d","Type":"ContainerStarted","Data":"f456c8628f413cbddb39a4ea21eda28481bc8ce0a9fad14f3d8e9c10f3206ddc"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.128566 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerStarted","Data":"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.128792 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-log" containerID="cri-o://ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.129081 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-httpd" containerID="cri-o://aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.150860 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerStarted","Data":"f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.153916 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-applier-0" podStartSLOduration=5.633308836 podStartE2EDuration="34.153897195s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.395331821 +0000 UTC m=+1159.877361945" lastFinishedPulling="2026-02-19 15:29:18.91592018 +0000 UTC m=+1188.397950304" observedRunningTime="2026-02-19 15:29:22.113584746 +0000 UTC m=+1191.595614870" watchObservedRunningTime="2026-02-19 15:29:22.153897195 +0000 UTC m=+1191.635927309" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.158021 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerStarted","Data":"48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.182348 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.182315382 podStartE2EDuration="2.182315382s" podCreationTimestamp="2026-02-19 15:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.145399257 +0000 UTC m=+1191.627429381" watchObservedRunningTime="2026-02-19 15:29:22.182315382 +0000 UTC m=+1191.664345506" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.187230 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7jdcp" podStartSLOduration=6.210674157 podStartE2EDuration="34.187220132s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.989576617 +0000 UTC m=+1160.471606741" lastFinishedPulling="2026-02-19 15:29:18.966122592 +0000 UTC m=+1188.448152716" observedRunningTime="2026-02-19 15:29:22.165473159 +0000 UTC m=+1191.647503283" watchObservedRunningTime="2026-02-19 15:29:22.187220132 +0000 UTC m=+1191.669250256" Feb 19 15:29:22 crc kubenswrapper[4810]: 
I0219 15:29:22.190640 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-log" containerID="cri-o://039aba0ed63f1dd6f66e34144e1de83684475b75135af6fd57bb70ce97ffc6b0" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.190667 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerStarted","Data":"1bdbd9fc9e86820ed204a84ba415c42f653c518a62ccc95617b0bc659f34c23c"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.190753 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-httpd" containerID="cri-o://1bdbd9fc9e86820ed204a84ba415c42f653c518a62ccc95617b0bc659f34c23c" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.217230 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerStarted","Data":"11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.217269 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerStarted","Data":"cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.217399 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-668f7d7fb5-l5kpq" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon-log" containerID="cri-o://cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4" gracePeriod=30 Feb 19 15:29:22 crc 
kubenswrapper[4810]: I0219 15:29:22.217703 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-668f7d7fb5-l5kpq" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon" containerID="cri-o://11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.233116 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.233091678 podStartE2EDuration="33.233091678s" podCreationTimestamp="2026-02-19 15:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.229670084 +0000 UTC m=+1191.711700228" watchObservedRunningTime="2026-02-19 15:29:22.233091678 +0000 UTC m=+1191.715121802" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.241950 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f56498b8d-9gwmf" podStartSLOduration=25.241923154 podStartE2EDuration="25.241923154s" podCreationTimestamp="2026-02-19 15:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.202668981 +0000 UTC m=+1191.684699105" watchObservedRunningTime="2026-02-19 15:29:22.241923154 +0000 UTC m=+1191.723953298" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.269898 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.290725 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-thnc7" 
event={"ID":"92797675-ddf7-43cf-90af-0248cf097509","Type":"ContainerStarted","Data":"aa561f23770b052d6b320e47499c0a8789e25a7a2367b69634f88f903c8d780a"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.290775 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-thnc7" event={"ID":"92797675-ddf7-43cf-90af-0248cf097509","Type":"ContainerStarted","Data":"29912d3ef9b6f31b1e73feb89f2d45be94dbb1cafdcd248d477030898edd801f"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.292914 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=34.292894095 podStartE2EDuration="34.292894095s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.256170814 +0000 UTC m=+1191.738200938" watchObservedRunningTime="2026-02-19 15:29:22.292894095 +0000 UTC m=+1191.774924219" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.307383 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-849c785789-5xrh2" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon-log" containerID="cri-o://09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.307567 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerStarted","Data":"267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.307588 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" 
event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerStarted","Data":"09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b"} Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.307620 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-849c785789-5xrh2" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon" containerID="cri-o://267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7" gracePeriod=30 Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.323344 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-668f7d7fb5-l5kpq" podStartSLOduration=5.524350402 podStartE2EDuration="31.323307491s" podCreationTimestamp="2026-02-19 15:28:51 +0000 UTC" firstStartedPulling="2026-02-19 15:28:54.625524709 +0000 UTC m=+1164.107554823" lastFinishedPulling="2026-02-19 15:29:20.424481788 +0000 UTC m=+1189.906511912" observedRunningTime="2026-02-19 15:29:22.277993399 +0000 UTC m=+1191.760023523" watchObservedRunningTime="2026-02-19 15:29:22.323307491 +0000 UTC m=+1191.805337615" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.370347 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.420035095 podStartE2EDuration="34.370307624s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.966562404 +0000 UTC m=+1160.448592528" lastFinishedPulling="2026-02-19 15:29:18.916834933 +0000 UTC m=+1188.398865057" observedRunningTime="2026-02-19 15:29:22.292373682 +0000 UTC m=+1191.774403806" watchObservedRunningTime="2026-02-19 15:29:22.370307624 +0000 UTC m=+1191.852337748" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.382472 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-thnc7" podStartSLOduration=12.382459112 podStartE2EDuration="12.382459112s" 
podCreationTimestamp="2026-02-19 15:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:22.306019287 +0000 UTC m=+1191.788049411" watchObservedRunningTime="2026-02-19 15:29:22.382459112 +0000 UTC m=+1191.864489236" Feb 19 15:29:22 crc kubenswrapper[4810]: I0219 15:29:22.402393 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-849c785789-5xrh2" podStartSLOduration=6.36844711 podStartE2EDuration="34.402377301s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.951545526 +0000 UTC m=+1160.433575650" lastFinishedPulling="2026-02-19 15:29:18.985475717 +0000 UTC m=+1188.467505841" observedRunningTime="2026-02-19 15:29:22.351003171 +0000 UTC m=+1191.833033295" watchObservedRunningTime="2026-02-19 15:29:22.402377301 +0000 UTC m=+1191.884407425" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.268718 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361055 4810 generic.go:334] "Generic (PLEG): container finished" podID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerID="aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" exitCode=0 Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361084 4810 generic.go:334] "Generic (PLEG): container finished" podID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerID="ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" exitCode=143 Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361150 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerDied","Data":"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d"} Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361177 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerDied","Data":"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93"} Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361186 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c2f952c-2122-43d9-b006-6967fd2b9029","Type":"ContainerDied","Data":"84ac3613a017c9c6f49ab1cf6c99c11f9ee32460de5f526447f4c9de546b4b3b"} Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361201 4810 scope.go:117] "RemoveContainer" containerID="aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.361414 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.368840 4810 generic.go:334] "Generic (PLEG): container finished" podID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerID="1bdbd9fc9e86820ed204a84ba415c42f653c518a62ccc95617b0bc659f34c23c" exitCode=0 Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.369094 4810 generic.go:334] "Generic (PLEG): container finished" podID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerID="039aba0ed63f1dd6f66e34144e1de83684475b75135af6fd57bb70ce97ffc6b0" exitCode=143 Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.369404 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerDied","Data":"1bdbd9fc9e86820ed204a84ba415c42f653c518a62ccc95617b0bc659f34c23c"} Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.369450 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerDied","Data":"039aba0ed63f1dd6f66e34144e1de83684475b75135af6fd57bb70ce97ffc6b0"} Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.416826 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417058 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417141 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417253 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2b86\" (UniqueName: \"kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417354 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417437 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417613 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.417691 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9c2f952c-2122-43d9-b006-6967fd2b9029\" (UID: \"9c2f952c-2122-43d9-b006-6967fd2b9029\") " Feb 19 
15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.419693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.419775 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs" (OuterVolumeSpecName: "logs") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.422827 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.431592 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts" (OuterVolumeSpecName: "scripts") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.447386 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86" (OuterVolumeSpecName: "kube-api-access-n2b86") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "kube-api-access-n2b86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.485769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data" (OuterVolumeSpecName: "config-data") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.495485 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.506315 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c2f952c-2122-43d9-b006-6967fd2b9029" (UID: "9c2f952c-2122-43d9-b006-6967fd2b9029"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.520384 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2b86\" (UniqueName: \"kubernetes.io/projected/9c2f952c-2122-43d9-b006-6967fd2b9029-kube-api-access-n2b86\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.522993 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523088 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523158 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523236 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523298 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c2f952c-2122-43d9-b006-6967fd2b9029-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523381 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.523467 4810 reconciler_common.go:293] 
"Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2f952c-2122-43d9-b006-6967fd2b9029-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.546631 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.575710 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.634566 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.639727 4810 scope.go:117] "RemoveContainer" containerID="ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.702459 4810 scope.go:117] "RemoveContainer" containerID="aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 15:29:23.707095 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d\": container with ID starting with aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d not found: ID does not exist" containerID="aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.707296 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d"} err="failed to get container status \"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d\": rpc error: code = 
NotFound desc = could not find container \"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d\": container with ID starting with aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d not found: ID does not exist" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.707395 4810 scope.go:117] "RemoveContainer" containerID="ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.710433 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 15:29:23.711280 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93\": container with ID starting with ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93 not found: ID does not exist" containerID="ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.711420 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93"} err="failed to get container status \"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93\": rpc error: code = NotFound desc = could not find container \"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93\": container with ID starting with ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93 not found: ID does not exist" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.711508 4810 scope.go:117] "RemoveContainer" containerID="aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.713973 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d"} err="failed to get container status \"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d\": rpc error: code = NotFound desc = could not find container \"aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d\": container with ID starting with aacb41ee75296b30fe484f99ea7be7d89eedb8e0ad8d20c614e61233ead7a68d not found: ID does not exist" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.714139 4810 scope.go:117] "RemoveContainer" containerID="ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.717085 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93"} err="failed to get container status \"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93\": rpc error: code = NotFound desc = could not find container \"ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93\": container with ID starting with ba83a3830aa23a7f50a84bf64a23008ce3bd4119d839094ddf609dc429db2d93 not found: ID does not exist" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.717349 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.731586 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 15:29:23.734717 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.734749 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 
15:29:23.734771 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.734777 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 15:29:23.734789 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.734795 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: E0219 15:29:23.734818 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.734836 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735048 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735069 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-httpd" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735077 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735094 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" containerName="glance-log" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735730 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735825 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.735905 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736022 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736139 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736294 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: 
\"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736390 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736456 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f8d7\" (UniqueName: \"kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7\") pod \"7a0789f1-099a-4f95-9626-a5ad7da804bc\" (UID: \"7a0789f1-099a-4f95-9626-a5ad7da804bc\") " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.736679 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.738513 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.740850 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.741051 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.741563 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.741819 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs" (OuterVolumeSpecName: "logs") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.756037 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.770646 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts" (OuterVolumeSpecName: "scripts") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.773518 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7" (OuterVolumeSpecName: "kube-api-access-8f8d7") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "kube-api-access-8f8d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.806669 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.816576 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838187 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838267 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838376 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838448 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrng\" (UniqueName: \"kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838504 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838528 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838605 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838618 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838656 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838666 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f8d7\" (UniqueName: \"kubernetes.io/projected/7a0789f1-099a-4f95-9626-a5ad7da804bc-kube-api-access-8f8d7\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838676 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0789f1-099a-4f95-9626-a5ad7da804bc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838687 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.838695 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.869837 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.911322 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data" (OuterVolumeSpecName: "config-data") pod "7a0789f1-099a-4f95-9626-a5ad7da804bc" (UID: "7a0789f1-099a-4f95-9626-a5ad7da804bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.939803 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.939888 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.939918 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.939943 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940125 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrng\" (UniqueName: \"kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940194 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940231 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940298 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7a0789f1-099a-4f95-9626-a5ad7da804bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940321 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.940932 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.941412 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.941466 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.959520 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.960461 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.961014 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.963143 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.971608 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrng\" (UniqueName: \"kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:23 crc kubenswrapper[4810]: I0219 15:29:23.981531 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.078202 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 15:29:24 crc kubenswrapper[4810]: 
I0219 15:29:24.089961 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.404443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a0789f1-099a-4f95-9626-a5ad7da804bc","Type":"ContainerDied","Data":"123c419edcd4407399df0f1f1452e33c4f00d373cd040db3a818757999826cf9"} Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.404728 4810 scope.go:117] "RemoveContainer" containerID="1bdbd9fc9e86820ed204a84ba415c42f653c518a62ccc95617b0bc659f34c23c" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.404833 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.472379 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.490614 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.498895 4810 scope.go:117] "RemoveContainer" containerID="039aba0ed63f1dd6f66e34144e1de83684475b75135af6fd57bb70ce97ffc6b0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.509446 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.510898 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.517671 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.517857 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.534144 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.660646 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661006 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47wl\" (UniqueName: \"kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661075 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661144 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661171 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661191 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661227 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.661245 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.708858 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:29:24 
crc kubenswrapper[4810]: I0219 15:29:24.762886 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.762936 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.762978 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.762997 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.763085 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.763113 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h47wl\" (UniqueName: \"kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.763182 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.763207 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.763647 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.764082 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.775494 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.781614 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.785948 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.786223 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.797987 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.816856 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h47wl\" (UniqueName: \"kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl\") pod 
\"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.828671 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " pod="openstack/glance-default-external-api-0" Feb 19 15:29:24 crc kubenswrapper[4810]: I0219 15:29:24.848842 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.420571 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerStarted","Data":"58cd5599b992d54d64de99ed5546382a6f34cf94866c9af8e9254502abddbf03"} Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.423806 4810 generic.go:334] "Generic (PLEG): container finished" podID="4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" containerID="da258067ce2c7912909dc7c937b6ad45df02bf5c8504937ad6d6f0ea0359724a" exitCode=0 Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.423879 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j989d" event={"ID":"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1","Type":"ContainerDied","Data":"da258067ce2c7912909dc7c937b6ad45df02bf5c8504937ad6d6f0ea0359724a"} Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.468905 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0789f1-099a-4f95-9626-a5ad7da804bc" path="/var/lib/kubelet/pods/7a0789f1-099a-4f95-9626-a5ad7da804bc/volumes" Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.470042 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2f952c-2122-43d9-b006-6967fd2b9029" 
path="/var/lib/kubelet/pods/9c2f952c-2122-43d9-b006-6967fd2b9029/volumes" Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.470836 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.470933 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:25 crc kubenswrapper[4810]: I0219 15:29:25.553533 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:29:26 crc kubenswrapper[4810]: I0219 15:29:26.453094 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerStarted","Data":"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c"} Feb 19 15:29:26 crc kubenswrapper[4810]: I0219 15:29:26.468634 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerStarted","Data":"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375"} Feb 19 15:29:26 crc kubenswrapper[4810]: I0219 15:29:26.468681 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerStarted","Data":"8e2ec136ee1702cfd45683995f2deb05321488c4c561ae75b5ecc3c327d09b7a"} Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.214384 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.483912 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerStarted","Data":"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd"} Feb 19 15:29:27 crc 
kubenswrapper[4810]: I0219 15:29:27.497789 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerStarted","Data":"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630"} Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.535618 4810 generic.go:334] "Generic (PLEG): container finished" podID="36fe6fdb-2970-4773-8184-a2d16b8ca89a" containerID="829a51aca23df8d8763078bcae4b4cba43b6c265996ab11fc55f6d42ce950516" exitCode=0 Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.535672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7jdcp" event={"ID":"36fe6fdb-2970-4773-8184-a2d16b8ca89a","Type":"ContainerDied","Data":"829a51aca23df8d8763078bcae4b4cba43b6c265996ab11fc55f6d42ce950516"} Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.536910 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.536888191 podStartE2EDuration="4.536888191s" podCreationTimestamp="2026-02-19 15:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:27.505218554 +0000 UTC m=+1196.987248678" watchObservedRunningTime="2026-02-19 15:29:27.536888191 +0000 UTC m=+1197.018918315" Feb 19 15:29:27 crc kubenswrapper[4810]: I0219 15:29:27.583243 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.583221818 podStartE2EDuration="3.583221818s" podCreationTimestamp="2026-02-19 15:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:27.550675159 +0000 UTC m=+1197.032705283" watchObservedRunningTime="2026-02-19 15:29:27.583221818 +0000 UTC 
m=+1197.065251942" Feb 19 15:29:28 crc kubenswrapper[4810]: I0219 15:29:28.162404 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:29:28 crc kubenswrapper[4810]: I0219 15:29:28.162465 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:29:28 crc kubenswrapper[4810]: I0219 15:29:28.273172 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:29:28 crc kubenswrapper[4810]: I0219 15:29:28.273232 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.078415 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.112273 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.125709 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:29 crc kubenswrapper[4810]: E0219 15:29:29.126403 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851 is running failed: container process not found" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:29 crc kubenswrapper[4810]: E0219 15:29:29.127699 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851 is running failed: container process not found" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:29 crc kubenswrapper[4810]: E0219 15:29:29.128021 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851 is running failed: container process not found" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:29 crc kubenswrapper[4810]: E0219 15:29:29.128056 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.237251 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.557379 4810 generic.go:334] "Generic (PLEG): container finished" podID="92797675-ddf7-43cf-90af-0248cf097509" containerID="aa561f23770b052d6b320e47499c0a8789e25a7a2367b69634f88f903c8d780a" exitCode=0 Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.557476 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-thnc7" event={"ID":"92797675-ddf7-43cf-90af-0248cf097509","Type":"ContainerDied","Data":"aa561f23770b052d6b320e47499c0a8789e25a7a2367b69634f88f903c8d780a"} Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.559107 4810 
generic.go:334] "Generic (PLEG): container finished" podID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" exitCode=1 Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.559185 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerDied","Data":"f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851"} Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.559939 4810 scope.go:117] "RemoveContainer" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.587848 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.612513 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.615380 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.664517 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:29:29 crc kubenswrapper[4810]: I0219 15:29:29.665114 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="dnsmasq-dns" containerID="cri-o://66285c6ba52e055f610f9273f522d8c7987aa81e9fe271b980a52b4e8dbd8794" gracePeriod=10 Feb 19 15:29:30 crc kubenswrapper[4810]: I0219 15:29:30.450196 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 15:29:30 crc kubenswrapper[4810]: I0219 15:29:30.455379 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/watcher-api-0" Feb 19 15:29:30 crc kubenswrapper[4810]: I0219 15:29:30.570317 4810 generic.go:334] "Generic (PLEG): container finished" podID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerID="66285c6ba52e055f610f9273f522d8c7987aa81e9fe271b980a52b4e8dbd8794" exitCode=0 Feb 19 15:29:30 crc kubenswrapper[4810]: I0219 15:29:30.570465 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" event={"ID":"2584fed3-16a4-489e-bf03-1c7461e9d3d8","Type":"ContainerDied","Data":"66285c6ba52e055f610f9273f522d8c7987aa81e9fe271b980a52b4e8dbd8794"} Feb 19 15:29:30 crc kubenswrapper[4810]: I0219 15:29:30.575309 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:29:31 crc kubenswrapper[4810]: I0219 15:29:31.578132 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerName="watcher-applier" containerID="cri-o://8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" gracePeriod=30 Feb 19 15:29:31 crc kubenswrapper[4810]: I0219 15:29:31.722567 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.142685 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j989d" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.162801 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.177728 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7jdcp" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240230 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sllpm\" (UniqueName: \"kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm\") pod \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240537 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle\") pod \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240593 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240697 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240717 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config\") pod \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\" (UID: \"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240787 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240843 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxtzf\" (UniqueName: \"kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.240873 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys\") pod \"92797675-ddf7-43cf-90af-0248cf097509\" (UID: \"92797675-ddf7-43cf-90af-0248cf097509\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.244497 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.252089 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm" (OuterVolumeSpecName: "kube-api-access-sllpm") pod "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" (UID: "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1"). InnerVolumeSpecName "kube-api-access-sllpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.252547 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf" (OuterVolumeSpecName: "kube-api-access-gxtzf") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "kube-api-access-gxtzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.259377 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts" (OuterVolumeSpecName: "scripts") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.265020 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.278429 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.297064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config" (OuterVolumeSpecName: "config") pod "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" (UID: "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.299135 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" (UID: "4dd5dede-cf58-43c7-954e-b9b1d33ad8d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.301566 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data" (OuterVolumeSpecName: "config-data") pod "92797675-ddf7-43cf-90af-0248cf097509" (UID: "92797675-ddf7-43cf-90af-0248cf097509"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.342812 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data\") pod \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.343097 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs\") pod \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.343468 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs" (OuterVolumeSpecName: "logs") pod "36fe6fdb-2970-4773-8184-a2d16b8ca89a" (UID: "36fe6fdb-2970-4773-8184-a2d16b8ca89a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.343698 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle\") pod \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.343878 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts\") pod \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.344248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qksxk\" (UniqueName: \"kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk\") pod \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\" (UID: \"36fe6fdb-2970-4773-8184-a2d16b8ca89a\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.344944 4810 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345040 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345224 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345295 4810 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345522 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345621 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36fe6fdb-2970-4773-8184-a2d16b8ca89a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345786 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxtzf\" (UniqueName: \"kubernetes.io/projected/92797675-ddf7-43cf-90af-0248cf097509-kube-api-access-gxtzf\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345860 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/92797675-ddf7-43cf-90af-0248cf097509-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345913 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sllpm\" (UniqueName: \"kubernetes.io/projected/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-kube-api-access-sllpm\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.345987 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.350150 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts" 
(OuterVolumeSpecName: "scripts") pod "36fe6fdb-2970-4773-8184-a2d16b8ca89a" (UID: "36fe6fdb-2970-4773-8184-a2d16b8ca89a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.350761 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk" (OuterVolumeSpecName: "kube-api-access-qksxk") pod "36fe6fdb-2970-4773-8184-a2d16b8ca89a" (UID: "36fe6fdb-2970-4773-8184-a2d16b8ca89a"). InnerVolumeSpecName "kube-api-access-qksxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.396697 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.452383 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qksxk\" (UniqueName: \"kubernetes.io/projected/36fe6fdb-2970-4773-8184-a2d16b8ca89a-kube-api-access-qksxk\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.452412 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.511198 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data" (OuterVolumeSpecName: "config-data") pod "36fe6fdb-2970-4773-8184-a2d16b8ca89a" (UID: "36fe6fdb-2970-4773-8184-a2d16b8ca89a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.527817 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36fe6fdb-2970-4773-8184-a2d16b8ca89a" (UID: "36fe6fdb-2970-4773-8184-a2d16b8ca89a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.553763 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.553811 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.553849 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckj5k\" (UniqueName: \"kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.553962 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.554098 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.554155 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0\") pod \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\" (UID: \"2584fed3-16a4-489e-bf03-1c7461e9d3d8\") " Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.554683 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.554711 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fe6fdb-2970-4773-8184-a2d16b8ca89a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.560672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k" (OuterVolumeSpecName: "kube-api-access-ckj5k") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "kube-api-access-ckj5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.594244 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerStarted","Data":"1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.597134 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j989d" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.600506 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j989d" event={"ID":"4dd5dede-cf58-43c7-954e-b9b1d33ad8d1","Type":"ContainerDied","Data":"af9963a31801a20460aedc2d93f7da81f8b9a7c2e7ec298ae21e257191169331"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.600545 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9963a31801a20460aedc2d93f7da81f8b9a7c2e7ec298ae21e257191169331" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.603113 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7jdcp" event={"ID":"36fe6fdb-2970-4773-8184-a2d16b8ca89a","Type":"ContainerDied","Data":"63add7d4ad95ec66116fce5bf4ebf1368a0ffff3203d7f02a7456da5980c8ad7"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.603168 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63add7d4ad95ec66116fce5bf4ebf1368a0ffff3203d7f02a7456da5980c8ad7" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.603254 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7jdcp" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.606914 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.607116 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.609552 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-thnc7" event={"ID":"92797675-ddf7-43cf-90af-0248cf097509","Type":"ContainerDied","Data":"29912d3ef9b6f31b1e73feb89f2d45be94dbb1cafdcd248d477030898edd801f"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.609585 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29912d3ef9b6f31b1e73feb89f2d45be94dbb1cafdcd248d477030898edd801f" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.609659 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-thnc7" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.612802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.617808 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.618573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" event={"ID":"2584fed3-16a4-489e-bf03-1c7461e9d3d8","Type":"ContainerDied","Data":"31b96436742b6e2f2b15881309695bfa289fff7a6145248f7da06333643cac1d"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.618614 4810 scope.go:117] "RemoveContainer" containerID="66285c6ba52e055f610f9273f522d8c7987aa81e9fe271b980a52b4e8dbd8794" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.618745 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6999bddfcf-fzf7g" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.625989 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerStarted","Data":"ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631"} Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.640085 4810 scope.go:117] "RemoveContainer" containerID="821ae759a5ba32197a93051b435e6f01263ceab4bc6d3d77eccce527b39b8143" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.644847 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config" (OuterVolumeSpecName: "config") pod "2584fed3-16a4-489e-bf03-1c7461e9d3d8" (UID: "2584fed3-16a4-489e-bf03-1c7461e9d3d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656154 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656189 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656202 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656214 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656226 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckj5k\" (UniqueName: \"kubernetes.io/projected/2584fed3-16a4-489e-bf03-1c7461e9d3d8-kube-api-access-ckj5k\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.656239 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2584fed3-16a4-489e-bf03-1c7461e9d3d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.949363 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:29:32 crc kubenswrapper[4810]: I0219 15:29:32.955949 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6999bddfcf-fzf7g"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.355847 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cd8bf58f4-ktsjk"] Feb 19 15:29:33 crc kubenswrapper[4810]: E0219 15:29:33.356563 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="dnsmasq-dns" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356583 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="dnsmasq-dns" Feb 19 15:29:33 crc kubenswrapper[4810]: E0219 15:29:33.356597 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92797675-ddf7-43cf-90af-0248cf097509" containerName="keystone-bootstrap" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356605 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="92797675-ddf7-43cf-90af-0248cf097509" containerName="keystone-bootstrap" Feb 19 15:29:33 crc kubenswrapper[4810]: E0219 15:29:33.356631 4810 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="init" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356639 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="init" Feb 19 15:29:33 crc kubenswrapper[4810]: E0219 15:29:33.356654 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" containerName="neutron-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356661 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" containerName="neutron-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: E0219 15:29:33.356677 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fe6fdb-2970-4773-8184-a2d16b8ca89a" containerName="placement-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356685 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fe6fdb-2970-4773-8184-a2d16b8ca89a" containerName="placement-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356891 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" containerName="dnsmasq-dns" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356918 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="92797675-ddf7-43cf-90af-0248cf097509" containerName="keystone-bootstrap" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356932 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fe6fdb-2970-4773-8184-a2d16b8ca89a" containerName="placement-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.356944 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" containerName="neutron-db-sync" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.357609 4810 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.359679 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.359825 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j78zz" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.359962 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.361587 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.361779 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.369894 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68ff886dc8-nntj6"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.371600 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.372529 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.377089 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.377273 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.377774 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-qpnp7" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.378048 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.378277 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.401645 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd8bf58f4-ktsjk"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.436376 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68ff886dc8-nntj6"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.463444 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2584fed3-16a4-489e-bf03-1c7461e9d3d8" path="/var/lib/kubelet/pods/2584fed3-16a4-489e-bf03-1c7461e9d3d8/volumes" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-public-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: 
\"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjl2x\" (UniqueName: \"kubernetes.io/projected/95165d88-ea72-4785-8c1a-eea4d54466fb-kube-api-access-qjl2x\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479402 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-combined-ca-bundle\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479458 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479519 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-internal-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479545 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-config-data\") pod 
\"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479574 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479599 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9c7\" (UniqueName: \"kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479615 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479645 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-fernet-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479661 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-credential-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479702 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479726 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479743 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-scripts\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.479804 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.510397 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.511916 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.518540 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.519981 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.522639 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.522875 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.522989 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.523126 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xcpvh" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.535268 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.557804 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.586989 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587207 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-scripts\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587400 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqw4\" (UniqueName: \"kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587494 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587597 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587673 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-public-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjl2x\" (UniqueName: \"kubernetes.io/projected/95165d88-ea72-4785-8c1a-eea4d54466fb-kube-api-access-qjl2x\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587803 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhf9w\" (UniqueName: \"kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587873 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.587940 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588015 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-combined-ca-bundle\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588102 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588224 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588341 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-internal-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588444 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-config-data\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588542 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588641 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588744 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9c7\" (UniqueName: \"kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588839 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.588960 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.589054 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.589158 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-fernet-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.589242 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-credential-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.589382 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.596769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.597434 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs\") pod \"placement-68ff886dc8-nntj6\" 
(UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.602972 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-combined-ca-bundle\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.604132 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.605757 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-config-data\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.606318 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-internal-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.606818 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc 
kubenswrapper[4810]: I0219 15:29:33.611602 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-public-tls-certs\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.614712 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-scripts\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.615029 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.615149 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-fernet-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.615593 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.615898 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/95165d88-ea72-4785-8c1a-eea4d54466fb-credential-keys\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.617924 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.628894 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9c7\" (UniqueName: \"kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7\") pod \"placement-68ff886dc8-nntj6\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.645454 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjl2x\" (UniqueName: \"kubernetes.io/projected/95165d88-ea72-4785-8c1a-eea4d54466fb-kube-api-access-qjl2x\") pod \"keystone-6cd8bf58f4-ktsjk\" (UID: \"95165d88-ea72-4785-8c1a-eea4d54466fb\") " pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.685976 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703304 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703452 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703506 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703542 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703585 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drqw4\" (UniqueName: \"kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc 
kubenswrapper[4810]: I0219 15:29:33.703623 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703666 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhf9w\" (UniqueName: \"kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703694 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703713 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703766 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.703791 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.704104 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.705301 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.706416 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.706914 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.711404 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " 
pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.712505 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.731136 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.731677 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.731741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.752142 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.761674 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vhf9w\" (UniqueName: \"kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w\") pod \"dnsmasq-dns-6cf5f86dff-7482l\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.761874 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqw4\" (UniqueName: \"kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4\") pod \"neutron-695bb7cdc6-72zs2\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") " pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.774179 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.775835 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.816634 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.837239 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.852692 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.916460 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.916517 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.916618 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.918183 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7555d68ddd-xqj8c"] Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.919798 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.928464 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7nj\" (UniqueName: \"kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.928824 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:33 crc kubenswrapper[4810]: I0219 15:29:33.930367 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7555d68ddd-xqj8c"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031298 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565eac29-daec-4b40-bcb7-751696560c3a-logs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031625 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031664 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-scripts\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031685 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-public-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031725 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031742 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031776 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-combined-ca-bundle\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031810 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pj2\" (UniqueName: 
\"kubernetes.io/projected/565eac29-daec-4b40-bcb7-751696560c3a-kube-api-access-25pj2\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031862 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-config-data\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7nj\" (UniqueName: \"kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.031944 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-internal-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.036706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.049781 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.050045 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.056085 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.069023 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7nj\" (UniqueName: \"kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj\") pod \"neutron-6cc67d5fc8-hs8lf\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.093704 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.093750 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: E0219 15:29:34.116174 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135532 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-config-data\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135602 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-internal-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135634 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565eac29-daec-4b40-bcb7-751696560c3a-logs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135673 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-scripts\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: 
I0219 15:29:34.135695 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-public-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135745 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-combined-ca-bundle\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.135776 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pj2\" (UniqueName: \"kubernetes.io/projected/565eac29-daec-4b40-bcb7-751696560c3a-kube-api-access-25pj2\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.143801 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-combined-ca-bundle\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.144068 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/565eac29-daec-4b40-bcb7-751696560c3a-logs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: E0219 15:29:34.160489 4810 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.160885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-scripts\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.161786 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-config-data\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.165799 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-internal-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.193787 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565eac29-daec-4b40-bcb7-751696560c3a-public-tls-certs\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: E0219 15:29:34.200095 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:29:34 crc kubenswrapper[4810]: E0219 15:29:34.200161 4810 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerName="watcher-applier" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.200908 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pj2\" (UniqueName: \"kubernetes.io/projected/565eac29-daec-4b40-bcb7-751696560c3a-kube-api-access-25pj2\") pod \"placement-7555d68ddd-xqj8c\" (UID: \"565eac29-daec-4b40-bcb7-751696560c3a\") " pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.219477 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.227071 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.242721 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.345662 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.352735 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cd8bf58f4-ktsjk"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.497833 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.498224 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api-log" containerID="cri-o://4368969b33587354836e3cdb6c31f7bd645f510dc4e3c262614ef8da31c0eb92" gracePeriod=30 Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.498676 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api" containerID="cri-o://aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1" gracePeriod=30 Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.748659 4810 generic.go:334] "Generic (PLEG): container finished" podID="cd961c7d-d551-4f5b-a08a-07d088947698" containerID="4368969b33587354836e3cdb6c31f7bd645f510dc4e3c262614ef8da31c0eb92" exitCode=143 Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.748743 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerDied","Data":"4368969b33587354836e3cdb6c31f7bd645f510dc4e3c262614ef8da31c0eb92"} Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.758177 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd8bf58f4-ktsjk" 
event={"ID":"95165d88-ea72-4785-8c1a-eea4d54466fb","Type":"ContainerStarted","Data":"aa41c53b974c1a8afd57bdec5addd138e6051cecaa12caee9704509f5c61518d"} Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.803377 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hmc6k" event={"ID":"2024a783-c3f9-4e57-b00f-52bec164e64e","Type":"ContainerStarted","Data":"ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b"} Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.803448 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.803462 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.804128 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68ff886dc8-nntj6"] Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.839900 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hmc6k" podStartSLOduration=4.306284117 podStartE2EDuration="46.839878099s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.998900665 +0000 UTC m=+1160.480930789" lastFinishedPulling="2026-02-19 15:29:33.532494647 +0000 UTC m=+1203.014524771" observedRunningTime="2026-02-19 15:29:34.829910204 +0000 UTC m=+1204.311940328" watchObservedRunningTime="2026-02-19 15:29:34.839878099 +0000 UTC m=+1204.321908223" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.850385 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.850433 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 15:29:34 crc 
kubenswrapper[4810]: I0219 15:29:34.965943 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.971467 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 15:29:34 crc kubenswrapper[4810]: I0219 15:29:34.979687 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.133857 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"] Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.281897 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7555d68ddd-xqj8c"] Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.377289 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.846211 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerStarted","Data":"fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.846675 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerStarted","Data":"b18b549a747d3077270169b17318853029444572e9290aac2a4fb288910d92c3"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.849360 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cd8bf58f4-ktsjk" event={"ID":"95165d88-ea72-4785-8c1a-eea4d54466fb","Type":"ContainerStarted","Data":"969ebc400f1294f5e658b8b96332740cbd1b67f0be09a41e3a45f2e93ee26edb"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.850570 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.853899 4810 generic.go:334] "Generic (PLEG): container finished" podID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerID="77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095" exitCode=0 Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.853957 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" event={"ID":"4f9534ee-827a-49fb-8588-a5a8a494be3c","Type":"ContainerDied","Data":"77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.853982 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" event={"ID":"4f9534ee-827a-49fb-8588-a5a8a494be3c","Type":"ContainerStarted","Data":"32fbc60f0dab31f943ed4d231a260fb910bacf46dc6517badaa0cc870b972e03"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.873774 4810 generic.go:334] "Generic (PLEG): container finished" podID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerID="8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" exitCode=0 Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.873833 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4be760b2-263c-4b89-8bdf-ecf98114a24f","Type":"ContainerDied","Data":"8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.875342 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerStarted","Data":"d1b99914aff75d6854dd18c44e4702b0f89c549bb21d23b94afcb4182e4386df"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.888040 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-6cd8bf58f4-ktsjk" podStartSLOduration=2.888021922 podStartE2EDuration="2.888021922s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:35.883894651 +0000 UTC m=+1205.365924775" watchObservedRunningTime="2026-02-19 15:29:35.888021922 +0000 UTC m=+1205.370052066" Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.894742 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7555d68ddd-xqj8c" event={"ID":"565eac29-daec-4b40-bcb7-751696560c3a","Type":"ContainerStarted","Data":"c57b5ba5a394f063373e611f4d92175660e2a4ae7e14c603f3b3bd232203b455"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.937218 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerStarted","Data":"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.937273 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerStarted","Data":"4f57c96680174420000630479bd05d2a43d222e27a564e7d19f21df9961dd34c"} Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.938166 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 15:29:35 crc kubenswrapper[4810]: I0219 15:29:35.938200 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.364814 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.402072 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle\") pod \"4be760b2-263c-4b89-8bdf-ecf98114a24f\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.402201 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data\") pod \"4be760b2-263c-4b89-8bdf-ecf98114a24f\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.402226 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs\") pod \"4be760b2-263c-4b89-8bdf-ecf98114a24f\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.402350 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjt9b\" (UniqueName: \"kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b\") pod \"4be760b2-263c-4b89-8bdf-ecf98114a24f\" (UID: \"4be760b2-263c-4b89-8bdf-ecf98114a24f\") " Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.407644 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs" (OuterVolumeSpecName: "logs") pod "4be760b2-263c-4b89-8bdf-ecf98114a24f" (UID: "4be760b2-263c-4b89-8bdf-ecf98114a24f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.408940 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b" (OuterVolumeSpecName: "kube-api-access-kjt9b") pod "4be760b2-263c-4b89-8bdf-ecf98114a24f" (UID: "4be760b2-263c-4b89-8bdf-ecf98114a24f"). InnerVolumeSpecName "kube-api-access-kjt9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.490451 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be760b2-263c-4b89-8bdf-ecf98114a24f" (UID: "4be760b2-263c-4b89-8bdf-ecf98114a24f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.507223 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be760b2-263c-4b89-8bdf-ecf98114a24f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.507266 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjt9b\" (UniqueName: \"kubernetes.io/projected/4be760b2-263c-4b89-8bdf-ecf98114a24f-kube-api-access-kjt9b\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.507281 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.537478 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data" 
(OuterVolumeSpecName: "config-data") pod "4be760b2-263c-4b89-8bdf-ecf98114a24f" (UID: "4be760b2-263c-4b89-8bdf-ecf98114a24f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.611231 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be760b2-263c-4b89-8bdf-ecf98114a24f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.965531 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerStarted","Data":"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"} Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.965829 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.965848 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.991711 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerStarted","Data":"dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a"} Feb 19 15:29:36 crc kubenswrapper[4810]: I0219 15:29:36.991799 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-695bb7cdc6-72zs2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.016285 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68ff886dc8-nntj6" podStartSLOduration=4.016265811 podStartE2EDuration="4.016265811s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:36.998006753 +0000 UTC m=+1206.480036877" watchObservedRunningTime="2026-02-19 15:29:37.016265811 +0000 UTC m=+1206.498295935" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.052587 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-695bb7cdc6-72zs2" podStartSLOduration=4.052563921 podStartE2EDuration="4.052563921s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:37.035731558 +0000 UTC m=+1206.517761672" watchObservedRunningTime="2026-02-19 15:29:37.052563921 +0000 UTC m=+1206.534594045" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.053624 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" event={"ID":"4f9534ee-827a-49fb-8588-a5a8a494be3c","Type":"ContainerStarted","Data":"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963"} Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.053697 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.083245 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"4be760b2-263c-4b89-8bdf-ecf98114a24f","Type":"ContainerDied","Data":"69118f89362c4dfde4f05ac6e0fde30afdc3fb74ce3c8ffeaee8a6df7e8789a5"} Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.083304 4810 scope.go:117] "RemoveContainer" containerID="8c99f71d93423dbb260d03502181a976def39a613d97465a06298513d67bb0bf" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.083517 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.091860 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" podStartSLOduration=4.091837705 podStartE2EDuration="4.091837705s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:37.081720087 +0000 UTC m=+1206.563750211" watchObservedRunningTime="2026-02-19 15:29:37.091837705 +0000 UTC m=+1206.573867829" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.173656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerStarted","Data":"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737"} Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.191768 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.206803 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.223470 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.223509 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.224413 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7555d68ddd-xqj8c" event={"ID":"565eac29-daec-4b40-bcb7-751696560c3a","Type":"ContainerStarted","Data":"7bd22beb5a6519315081ec047ff7382f0fc2a07b39d0290f03b43dbfb6d54778"} Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.224483 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-7555d68ddd-xqj8c" event={"ID":"565eac29-daec-4b40-bcb7-751696560c3a","Type":"ContainerStarted","Data":"84a7ee0e1e0aec17ff38a3db779574c02a5552a158b322fbf4958626bf04c841"} Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.233195 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:37 crc kubenswrapper[4810]: E0219 15:29:37.233654 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerName="watcher-applier" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.233674 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerName="watcher-applier" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.233880 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" containerName="watcher-applier" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.234645 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.240556 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.290498 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.310257 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7555d68ddd-xqj8c" podStartSLOduration=4.310239463 podStartE2EDuration="4.310239463s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:37.24818163 +0000 UTC m=+1206.730211754" watchObservedRunningTime="2026-02-19 15:29:37.310239463 +0000 UTC m=+1206.792269587" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.314402 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.370893 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dfcf65577-bd5w2"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.372634 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.379638 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.379808 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.380704 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dfcf65577-bd5w2"] Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.452708 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be760b2-263c-4b89-8bdf-ecf98114a24f" path="/var/lib/kubelet/pods/4be760b2-263c-4b89-8bdf-ecf98114a24f/volumes" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.469851 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.469896 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72k9\" (UniqueName: \"kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.469993 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: 
I0219 15:29:37.470013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.570994 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97g7\" (UniqueName: \"kubernetes.io/projected/6528bdfd-3389-4776-826e-164fc5117682-kube-api-access-t97g7\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571056 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-httpd-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571081 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-ovndb-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571108 
4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f72k9\" (UniqueName: \"kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571147 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-combined-ca-bundle\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571174 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-public-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571234 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.571313 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-internal-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.575513 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.576233 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.576765 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.595022 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f72k9\" (UniqueName: \"kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9\") pod \"watcher-applier-0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") 
" pod="openstack/watcher-applier-0" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.653825 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:33186->10.217.0.168:9322: read: connection reset by peer" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.653902 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:33190->10.217.0.168:9322: read: connection reset by peer" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.672812 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-httpd-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.672859 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-ovndb-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.672895 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-combined-ca-bundle\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.672922 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-public-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.673023 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-internal-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.673044 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.673059 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97g7\" (UniqueName: \"kubernetes.io/projected/6528bdfd-3389-4776-826e-164fc5117682-kube-api-access-t97g7\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.681794 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-internal-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.682992 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-combined-ca-bundle\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.686369 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.687053 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-ovndb-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.688041 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-public-tls-certs\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.688417 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6528bdfd-3389-4776-826e-164fc5117682-httpd-config\") pod \"neutron-6dfcf65577-bd5w2\" (UID: \"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.692903 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97g7\" (UniqueName: \"kubernetes.io/projected/6528bdfd-3389-4776-826e-164fc5117682-kube-api-access-t97g7\") pod \"neutron-6dfcf65577-bd5w2\" (UID: 
\"6528bdfd-3389-4776-826e-164fc5117682\") " pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.702160 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:37 crc kubenswrapper[4810]: I0219 15:29:37.890649 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.166149 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.251873 4810 generic.go:334] "Generic (PLEG): container finished" podID="cd961c7d-d551-4f5b-a08a-07d088947698" containerID="aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1" exitCode=0 Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.252706 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerDied","Data":"aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1"} Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.252839 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.252854 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.253887 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.253910 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:29:38 crc kubenswrapper[4810]: I0219 15:29:38.274949 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f56498b8d-9gwmf" podUID="737d6629-747f-4d16-a545-d0070c20fe5d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.050096 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.051837 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dfcf65577-bd5w2"] Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.112004 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs\") pod \"cd961c7d-d551-4f5b-a08a-07d088947698\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.112199 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca\") pod \"cd961c7d-d551-4f5b-a08a-07d088947698\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.112292 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle\") pod \"cd961c7d-d551-4f5b-a08a-07d088947698\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.112370 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data\") pod \"cd961c7d-d551-4f5b-a08a-07d088947698\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.112425 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdn2x\" (UniqueName: \"kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x\") pod \"cd961c7d-d551-4f5b-a08a-07d088947698\" (UID: \"cd961c7d-d551-4f5b-a08a-07d088947698\") " Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.113708 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs" (OuterVolumeSpecName: "logs") pod "cd961c7d-d551-4f5b-a08a-07d088947698" (UID: "cd961c7d-d551-4f5b-a08a-07d088947698"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.125536 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:39 crc kubenswrapper[4810]: E0219 15:29:39.129701 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631 is running failed: container process not found" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.129890 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x" (OuterVolumeSpecName: "kube-api-access-zdn2x") pod "cd961c7d-d551-4f5b-a08a-07d088947698" (UID: "cd961c7d-d551-4f5b-a08a-07d088947698"). 
InnerVolumeSpecName "kube-api-access-zdn2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:39 crc kubenswrapper[4810]: E0219 15:29:39.133445 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631 is running failed: container process not found" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:39 crc kubenswrapper[4810]: E0219 15:29:39.137686 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631 is running failed: container process not found" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 19 15:29:39 crc kubenswrapper[4810]: E0219 15:29:39.137770 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.157039 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.186852 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd961c7d-d551-4f5b-a08a-07d088947698" (UID: 
"cd961c7d-d551-4f5b-a08a-07d088947698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.186929 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cd961c7d-d551-4f5b-a08a-07d088947698" (UID: "cd961c7d-d551-4f5b-a08a-07d088947698"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.206535 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data" (OuterVolumeSpecName: "config-data") pod "cd961c7d-d551-4f5b-a08a-07d088947698" (UID: "cd961c7d-d551-4f5b-a08a-07d088947698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.214917 4810 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.214947 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.214959 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd961c7d-d551-4f5b-a08a-07d088947698-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.214972 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdn2x\" (UniqueName: 
\"kubernetes.io/projected/cd961c7d-d551-4f5b-a08a-07d088947698-kube-api-access-zdn2x\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.214983 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd961c7d-d551-4f5b-a08a-07d088947698-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.296433 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfcf65577-bd5w2" event={"ID":"6528bdfd-3389-4776-826e-164fc5117682","Type":"ContainerStarted","Data":"8d4fcb3fc70457369ed906639e5d7920cbca2fb8876e0eee30e7e8eccacefa6c"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.305649 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-svmgl" event={"ID":"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5","Type":"ContainerStarted","Data":"61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.321965 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cd961c7d-d551-4f5b-a08a-07d088947698","Type":"ContainerDied","Data":"3ca7fe1f4f8bad9a3a06d89d4141ff28e32b10f6d3445d4b3f2404b6e71c942f"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.322173 4810 scope.go:117] "RemoveContainer" containerID="aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.322410 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.346916 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerStarted","Data":"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.348150 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.354143 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ea855ba-523c-4143-8fe8-b0b1150299d0","Type":"ContainerStarted","Data":"1d01a4c95612c0c4f1b7f9b7042052db9fc19c8db413469f648bf9735bce00e6"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.375535 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-svmgl" podStartSLOduration=6.618609006 podStartE2EDuration="51.375517298s" podCreationTimestamp="2026-02-19 15:28:48 +0000 UTC" firstStartedPulling="2026-02-19 15:28:50.967593859 +0000 UTC m=+1160.449623973" lastFinishedPulling="2026-02-19 15:29:35.724502141 +0000 UTC m=+1205.206532265" observedRunningTime="2026-02-19 15:29:39.323075582 +0000 UTC m=+1208.805105706" watchObservedRunningTime="2026-02-19 15:29:39.375517298 +0000 UTC m=+1208.857547422" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.379133 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cc67d5fc8-hs8lf" podStartSLOduration=6.379123227 podStartE2EDuration="6.379123227s" podCreationTimestamp="2026-02-19 15:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:39.369699225 +0000 UTC m=+1208.851729379" watchObservedRunningTime="2026-02-19 15:29:39.379123227 
+0000 UTC m=+1208.861153351" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.380054 4810 generic.go:334] "Generic (PLEG): container finished" podID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" exitCode=1 Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.380146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerDied","Data":"ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631"} Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.381102 4810 scope.go:117] "RemoveContainer" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" Feb 19 15:29:39 crc kubenswrapper[4810]: E0219 15:29:39.381346 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3eb2dccd-c5dc-436f-b7a6-954af7bc51c5)\"" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.381923 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-695bb7cdc6-72zs2" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-api" containerID="cri-o://fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a" gracePeriod=30 Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.382085 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-695bb7cdc6-72zs2" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-httpd" containerID="cri-o://dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a" gracePeriod=30 Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.509308 4810 
scope.go:117] "RemoveContainer" containerID="4368969b33587354836e3cdb6c31f7bd645f510dc4e3c262614ef8da31c0eb92" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.593444 4810 scope.go:117] "RemoveContainer" containerID="f9469eea900aae9e94121b7851a3eacdafc431e0f15d0e8319f5a93a2616b851" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.918373 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 15:29:39 crc kubenswrapper[4810]: I0219 15:29:39.918460 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.068044 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.325115 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.325220 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.341400 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.437847 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ea855ba-523c-4143-8fe8-b0b1150299d0","Type":"ContainerStarted","Data":"9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8"} Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.455617 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfcf65577-bd5w2" event={"ID":"6528bdfd-3389-4776-826e-164fc5117682","Type":"ContainerStarted","Data":"b8c272c13d1b92aba75222a0570054705e53f0081b56e6b23d6a0c0ea9d19e36"} Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.455670 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dfcf65577-bd5w2" event={"ID":"6528bdfd-3389-4776-826e-164fc5117682","Type":"ContainerStarted","Data":"15e4052ed266d6ebf83cd909256437484aa3ff4ec25a466bae110f8eecff1146"} Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.455711 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.468799 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.468780979 podStartE2EDuration="3.468780979s" podCreationTimestamp="2026-02-19 15:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:40.463673883 +0000 UTC m=+1209.945703997" watchObservedRunningTime="2026-02-19 15:29:40.468780979 +0000 UTC m=+1209.950811103" Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.479864 4810 generic.go:334] "Generic (PLEG): container finished" podID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerID="dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a" exitCode=0 Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.480996 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerDied","Data":"dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a"} Feb 19 15:29:40 crc kubenswrapper[4810]: I0219 15:29:40.490759 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dfcf65577-bd5w2" podStartSLOduration=3.490743957 podStartE2EDuration="3.490743957s" podCreationTimestamp="2026-02-19 15:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:40.489013395 +0000 
UTC m=+1209.971043519" watchObservedRunningTime="2026-02-19 15:29:40.490743957 +0000 UTC m=+1209.972774081" Feb 19 15:29:42 crc kubenswrapper[4810]: I0219 15:29:42.891162 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 15:29:43 crc kubenswrapper[4810]: I0219 15:29:43.527351 4810 generic.go:334] "Generic (PLEG): container finished" podID="2024a783-c3f9-4e57-b00f-52bec164e64e" containerID="ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b" exitCode=0 Feb 19 15:29:43 crc kubenswrapper[4810]: I0219 15:29:43.527382 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hmc6k" event={"ID":"2024a783-c3f9-4e57-b00f-52bec164e64e","Type":"ContainerDied","Data":"ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b"} Feb 19 15:29:43 crc kubenswrapper[4810]: I0219 15:29:43.854928 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:29:43 crc kubenswrapper[4810]: I0219 15:29:43.920503 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:29:43 crc kubenswrapper[4810]: I0219 15:29:43.920719 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="dnsmasq-dns" containerID="cri-o://acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b" gracePeriod=10 Feb 19 15:29:44 crc kubenswrapper[4810]: I0219 15:29:44.555965 4810 generic.go:334] "Generic (PLEG): container finished" podID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerID="acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b" exitCode=0 Feb 19 15:29:44 crc kubenswrapper[4810]: I0219 15:29:44.556460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" 
event={"ID":"18ca6546-69fd-492d-81c5-bb18c56b045d","Type":"ContainerDied","Data":"acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b"} Feb 19 15:29:44 crc kubenswrapper[4810]: I0219 15:29:44.612800 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Feb 19 15:29:45 crc kubenswrapper[4810]: I0219 15:29:45.956224 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.076715 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data\") pod \"2024a783-c3f9-4e57-b00f-52bec164e64e\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.076803 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btmxg\" (UniqueName: \"kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg\") pod \"2024a783-c3f9-4e57-b00f-52bec164e64e\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.076876 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle\") pod \"2024a783-c3f9-4e57-b00f-52bec164e64e\" (UID: \"2024a783-c3f9-4e57-b00f-52bec164e64e\") " Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.087530 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data" (OuterVolumeSpecName: 
"db-sync-config-data") pod "2024a783-c3f9-4e57-b00f-52bec164e64e" (UID: "2024a783-c3f9-4e57-b00f-52bec164e64e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.101849 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg" (OuterVolumeSpecName: "kube-api-access-btmxg") pod "2024a783-c3f9-4e57-b00f-52bec164e64e" (UID: "2024a783-c3f9-4e57-b00f-52bec164e64e"). InnerVolumeSpecName "kube-api-access-btmxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.108682 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2024a783-c3f9-4e57-b00f-52bec164e64e" (UID: "2024a783-c3f9-4e57-b00f-52bec164e64e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.181744 4810 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.181779 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btmxg\" (UniqueName: \"kubernetes.io/projected/2024a783-c3f9-4e57-b00f-52bec164e64e-kube-api-access-btmxg\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.181790 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2024a783-c3f9-4e57-b00f-52bec164e64e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.573044 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hmc6k" event={"ID":"2024a783-c3f9-4e57-b00f-52bec164e64e","Type":"ContainerDied","Data":"398a9dcb2e74ac56cd2827ea038790abeb16b5f6b3573a48b306842a166c3f44"} Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.573347 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="398a9dcb2e74ac56cd2827ea038790abeb16b5f6b3573a48b306842a166c3f44" Feb 19 15:29:46 crc kubenswrapper[4810]: I0219 15:29:46.573397 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hmc6k" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.351400 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:47 crc kubenswrapper[4810]: E0219 15:29:47.351763 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" containerName="barbican-db-sync" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.351776 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" containerName="barbican-db-sync" Feb 19 15:29:47 crc kubenswrapper[4810]: E0219 15:29:47.351804 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.351811 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api" Feb 19 15:29:47 crc kubenswrapper[4810]: E0219 15:29:47.351826 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api-log" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.351833 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api-log" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.352001 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" containerName="barbican-db-sync" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.352029 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" containerName="watcher-api-log" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.352040 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" 
containerName="watcher-api" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.352988 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.398387 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58f8775989-n9rgr"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.400008 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.415957 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.416368 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.416564 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mx2zj" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.420779 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.426523 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-75f99f68b4-d7hj4"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.428088 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.431112 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.466810 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f8775989-n9rgr"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.466854 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75f99f68b4-d7hj4"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.518135 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-logs\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.518194 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.518356 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data-custom\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.519730 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.519833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.519853 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.519908 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.519952 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.520013 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85jt\" (UniqueName: \"kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.520055 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-combined-ca-bundle\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.520104 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvx44\" (UniqueName: \"kubernetes.io/projected/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-kube-api-access-rvx44\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.582528 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.584264 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.588261 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.603388 4810 generic.go:334] "Generic (PLEG): container finished" podID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" containerID="61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448" exitCode=0 Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.603434 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-svmgl" event={"ID":"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5","Type":"ContainerDied","Data":"61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448"} Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621537 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621570 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 
15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621612 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85jt\" (UniqueName: \"kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621644 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-combined-ca-bundle\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621678 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvx44\" (UniqueName: \"kubernetes.io/projected/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-kube-api-access-rvx44\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621718 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data-custom\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621748 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-logs\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " 
pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621786 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621834 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgxx\" (UniqueName: \"kubernetes.io/projected/c008ffcd-bb96-47dd-a311-fdc58f6d8918-kube-api-access-4kgxx\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621913 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data-custom\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621937 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621965 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-combined-ca-bundle\") pod 
\"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.621994 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c008ffcd-bb96-47dd-a311-fdc58f6d8918-logs\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.622092 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.622115 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.623143 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.623829 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.624618 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.625248 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.631870 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data-custom\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.635976 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"] Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.636033 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-logs\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.637559 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.639477 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-combined-ca-bundle\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.643425 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-config-data\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.646186 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85jt\" (UniqueName: \"kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt\") pod \"dnsmasq-dns-5b44bbb4dc-dktkl\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.656776 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvx44\" (UniqueName: \"kubernetes.io/projected/f277c31b-ff97-4f3b-aec3-c5cfe9293d60-kube-api-access-rvx44\") pod \"barbican-worker-58f8775989-n9rgr\" (UID: \"f277c31b-ff97-4f3b-aec3-c5cfe9293d60\") " pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.699401 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724014 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724116 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724146 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724166 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data-custom\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724209 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kgxx\" (UniqueName: \"kubernetes.io/projected/c008ffcd-bb96-47dd-a311-fdc58f6d8918-kube-api-access-4kgxx\") pod 
\"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724232 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkqn\" (UniqueName: \"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724253 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724286 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724335 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-combined-ca-bundle\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724357 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c008ffcd-bb96-47dd-a311-fdc58f6d8918-logs\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.724759 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c008ffcd-bb96-47dd-a311-fdc58f6d8918-logs\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.727883 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-combined-ca-bundle\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.729652 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.731030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c008ffcd-bb96-47dd-a311-fdc58f6d8918-config-data-custom\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.740342 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4kgxx\" (UniqueName: \"kubernetes.io/projected/c008ffcd-bb96-47dd-a311-fdc58f6d8918-kube-api-access-4kgxx\") pod \"barbican-keystone-listener-75f99f68b4-d7hj4\" (UID: \"c008ffcd-bb96-47dd-a311-fdc58f6d8918\") " pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.761352 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58f8775989-n9rgr" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.790619 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.826715 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.826773 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.826872 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkqn\" (UniqueName: \"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.826933 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.826989 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.827313 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.832879 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.833502 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.833645 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.842534 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkqn\" (UniqueName: \"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn\") pod \"barbican-api-79fc56bc44-tfjh4\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.891773 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.906981 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:47 crc kubenswrapper[4810]: I0219 15:29:47.926722 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.153773 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.341922 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.342035 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nljdr\" (UniqueName: \"kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.342130 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.342159 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.342231 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.342301 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config\") pod \"18ca6546-69fd-492d-81c5-bb18c56b045d\" (UID: \"18ca6546-69fd-492d-81c5-bb18c56b045d\") " Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.351619 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr" (OuterVolumeSpecName: "kube-api-access-nljdr") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "kube-api-access-nljdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.410452 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.423019 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config" (OuterVolumeSpecName: "config") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.444129 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.444156 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.444166 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nljdr\" (UniqueName: \"kubernetes.io/projected/18ca6546-69fd-492d-81c5-bb18c56b045d-kube-api-access-nljdr\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.473015 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.496041 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.496762 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "18ca6546-69fd-492d-81c5-bb18c56b045d" (UID: "18ca6546-69fd-492d-81c5-bb18c56b045d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.545543 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.545568 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.545578 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/18ca6546-69fd-492d-81c5-bb18c56b045d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:48 crc kubenswrapper[4810]: E0219 15:29:48.548078 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.614233 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" event={"ID":"18ca6546-69fd-492d-81c5-bb18c56b045d","Type":"ContainerDied","Data":"3e6a529b000841e709c2e1d05c4d119a28c8f20c2ece39574181d8df78a6c626"} Feb 19 15:29:48 crc 
kubenswrapper[4810]: I0219 15:29:48.614279 4810 scope.go:117] "RemoveContainer" containerID="acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.614403 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4cfd6f6c-s7m64" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.622417 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="ceilometer-notification-agent" containerID="cri-o://48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f" gracePeriod=30 Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.622704 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerStarted","Data":"f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09"} Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.623900 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.627552 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="proxy-httpd" containerID="cri-o://f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09" gracePeriod=30 Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.627642 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="sg-core" containerID="cri-o://1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7" gracePeriod=30 Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.704103 4810 scope.go:117] "RemoveContainer" 
containerID="7bc85bb988b4afd58694705b3dae68dca31213574b00f92c71f4c77c5edfdf98" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.704298 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.801023 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"] Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.835243 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.847365 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:29:48 crc kubenswrapper[4810]: I0219 15:29:48.880029 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f4cfd6f6c-s7m64"] Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.125860 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.126246 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.126261 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.127123 4810 scope.go:117] "RemoveContainer" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.303777 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58f8775989-n9rgr"] Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.337153 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75f99f68b4-d7hj4"] Feb 19 15:29:49 crc 
kubenswrapper[4810]: I0219 15:29:49.476082 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" path="/var/lib/kubelet/pods/18ca6546-69fd-492d-81c5-bb18c56b045d/volumes" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.506000 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-svmgl" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584457 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584780 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584808 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584823 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584853 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqnf7\" (UniqueName: 
\"kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584950 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.584981 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle\") pod \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\" (UID: \"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5\") " Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.585507 4810 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.591907 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts" (OuterVolumeSpecName: "scripts") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.594671 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7" (OuterVolumeSpecName: "kube-api-access-cqnf7") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "kube-api-access-cqnf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.594699 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.623455 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.654976 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f8775989-n9rgr" event={"ID":"f277c31b-ff97-4f3b-aec3-c5cfe9293d60","Type":"ContainerStarted","Data":"a883cebc3822bfccdfab0bd32dc52bbbd5c9613c2189b603992822c3157d2886"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.657967 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data" (OuterVolumeSpecName: "config-data") pod "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" (UID: "848dfe9d-05f4-4ba9-919e-23e9a7ae63d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.666789 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" event={"ID":"c008ffcd-bb96-47dd-a311-fdc58f6d8918","Type":"ContainerStarted","Data":"3e52e508c6501d7dabbd530b368b25c699055934d491accb25c6d0cceddd787e"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.690968 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerStarted","Data":"a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.695270 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.695565 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc 
kubenswrapper[4810]: I0219 15:29:49.695578 4810 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.695589 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.695599 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqnf7\" (UniqueName: \"kubernetes.io/projected/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5-kube-api-access-cqnf7\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.721619 4810 generic.go:334] "Generic (PLEG): container finished" podID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" containerID="b192a1bf8058da9cccb4f9c8a3c59226e8d40284e5443eca3683d4b0120ebdd4" exitCode=0 Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.721642 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" event={"ID":"b1f44651-e4eb-4cce-a493-9dd9b491b22a","Type":"ContainerDied","Data":"b192a1bf8058da9cccb4f9c8a3c59226e8d40284e5443eca3683d4b0120ebdd4"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.751523 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" event={"ID":"b1f44651-e4eb-4cce-a493-9dd9b491b22a","Type":"ContainerStarted","Data":"d03f7f66a9a3b08a44840b959b3f1bdcaeb2f46d803e33b39c92670ff95677e9"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.760926 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerStarted","Data":"800222f8de7a26806971959afe00821e5af0eb096530f9abfdb9f64bd62612cb"} Feb 19 
15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.760979 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerStarted","Data":"6c0cc91325e6d2565c8ddd9b1eede9d65e56a0efbeee132cbd5a27f02f0ed7ef"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.760995 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerStarted","Data":"9a79a3684682be966eafd1448e2f56c188e39ccc6a243e2540b5912c1bbf9c6c"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.761271 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.762124 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.765089 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-svmgl" event={"ID":"848dfe9d-05f4-4ba9-919e-23e9a7ae63d5","Type":"ContainerDied","Data":"ce811eb96390a8f8365be6dee0926b85f5365cb868957ed22165ebbf7d343712"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.765145 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce811eb96390a8f8365be6dee0926b85f5365cb868957ed22165ebbf7d343712" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.765260 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-svmgl" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.774135 4810 generic.go:334] "Generic (PLEG): container finished" podID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerID="1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7" exitCode=2 Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.774174 4810 generic.go:334] "Generic (PLEG): container finished" podID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerID="48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f" exitCode=0 Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.774245 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerDied","Data":"1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.774305 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerDied","Data":"48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f"} Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.933420 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:29:49 crc kubenswrapper[4810]: E0219 15:29:49.933881 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" containerName="cinder-db-sync" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.933900 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" containerName="cinder-db-sync" Feb 19 15:29:49 crc kubenswrapper[4810]: E0219 15:29:49.933921 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="init" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.933930 4810 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="init" Feb 19 15:29:49 crc kubenswrapper[4810]: E0219 15:29:49.933946 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="dnsmasq-dns" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.933954 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="dnsmasq-dns" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.934141 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ca6546-69fd-492d-81c5-bb18c56b045d" containerName="dnsmasq-dns" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.934163 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" containerName="cinder-db-sync" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.935365 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.942227 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.942446 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.942589 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.942699 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nxm2z" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.967654 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79fc56bc44-tfjh4" podStartSLOduration=2.967636306 podStartE2EDuration="2.967636306s" 
podCreationTimestamp="2026-02-19 15:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:49.890548885 +0000 UTC m=+1219.372579009" watchObservedRunningTime="2026-02-19 15:29:49.967636306 +0000 UTC m=+1219.449666430" Feb 19 15:29:49 crc kubenswrapper[4810]: I0219 15:29:49.973025 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.034394 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051602 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051645 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051672 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051712 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051729 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.051809 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw97l\" (UniqueName: \"kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.113282 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.114990 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.122210 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw97l\" (UniqueName: \"kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153529 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153547 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153568 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153603 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.153620 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.157484 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.160025 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.161264 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.162630 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.166232 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.191357 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw97l\" (UniqueName: \"kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l\") pod \"cinder-scheduler-0\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.211562 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.213143 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.215842 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.255027 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.255071 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 
15:29:50.255140 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.255173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.255211 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.255229 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hfc\" (UniqueName: \"kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.260462 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:50 crc kubenswrapper[4810]: E0219 15:29:50.358315 4810 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/99d74501f5843fd610be3a49ec1837e9345737eecaaac0265ae9e4dfbf8c5c14/diff" to get inode 
usage: stat /var/lib/containers/storage/overlay/99d74501f5843fd610be3a49ec1837e9345737eecaaac0265ae9e4dfbf8c5c14/diff: no such file or directory, extraDiskErr: Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359508 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359549 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359590 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359616 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359643 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vrh\" (UniqueName: \"kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh\") pod 
\"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359670 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359686 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hfc\" (UniqueName: \"kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359709 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359753 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359771 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc 
kubenswrapper[4810]: I0219 15:29:50.359792 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359811 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.359827 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.360731 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.361435 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.361893 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.362068 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.362606 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.380130 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hfc\" (UniqueName: \"kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc\") pod \"dnsmasq-dns-6c58b86477-9tbw7\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.402090 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.461742 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.461821 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.461848 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.461934 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7vrh\" (UniqueName: \"kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.461979 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.462016 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.462027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.462118 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.462611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.465296 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.485069 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.486213 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.487525 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.487792 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.497588 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7vrh\" (UniqueName: \"kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh\") pod \"cinder-api-0\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: I0219 15:29:50.540313 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:50 crc kubenswrapper[4810]: E0219 15:29:50.573521 4810 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 15:29:50 crc kubenswrapper[4810]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b1f44651-e4eb-4cce-a493-9dd9b491b22a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 15:29:50 crc kubenswrapper[4810]: > podSandboxID="d03f7f66a9a3b08a44840b959b3f1bdcaeb2f46d803e33b39c92670ff95677e9" Feb 19 15:29:50 crc kubenswrapper[4810]: E0219 15:29:50.573787 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 15:29:50 crc kubenswrapper[4810]: container &Container{Name:dnsmasq-dns,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n4h578h588h57ch687h559h5f4h5ffh659h594h667h666h588h679hdh56h55hfhbh5f7h67fh5c4h645hd9h684h689h555h549h646h58bh546h664q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m85jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5b44bbb4dc-dktkl_openstack(b1f44651-e4eb-4cce-a493-9dd9b491b22a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b1f44651-e4eb-4cce-a493-9dd9b491b22a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 15:29:50 crc kubenswrapper[4810]: > logger="UnhandledError" Feb 19 15:29:50 crc kubenswrapper[4810]: E0219 15:29:50.575178 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b1f44651-e4eb-4cce-a493-9dd9b491b22a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" podUID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" Feb 19 15:29:51 crc kubenswrapper[4810]: I0219 15:29:51.031728 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:29:51 crc kubenswrapper[4810]: I0219 15:29:51.215251 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:29:51 crc 
kubenswrapper[4810]: I0219 15:29:51.318668 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:29:51 crc kubenswrapper[4810]: I0219 15:29:51.435350 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:51 crc kubenswrapper[4810]: I0219 15:29:51.747029 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.202126 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.233998 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc\") pod \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.234528 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb\") pod \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.234700 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config\") pod \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.234875 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85jt\" (UniqueName: \"kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt\") pod 
\"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.234936 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0\") pod \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.235057 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb\") pod \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\" (UID: \"b1f44651-e4eb-4cce-a493-9dd9b491b22a\") " Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.254434 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt" (OuterVolumeSpecName: "kube-api-access-m85jt") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "kube-api-access-m85jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.326693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.328857 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.337556 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85jt\" (UniqueName: \"kubernetes.io/projected/b1f44651-e4eb-4cce-a493-9dd9b491b22a-kube-api-access-m85jt\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.337591 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.337602 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.338945 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.348870 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config" (OuterVolumeSpecName: "config") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.367634 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1f44651-e4eb-4cce-a493-9dd9b491b22a" (UID: "b1f44651-e4eb-4cce-a493-9dd9b491b22a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.439593 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.439684 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.439706 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1f44651-e4eb-4cce-a493-9dd9b491b22a-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.819310 4810 generic.go:334] "Generic (PLEG): container finished" podID="08eca88c-a4b4-461b-8568-ebbf54645272" containerID="11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b" exitCode=137 Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.819384 4810 
generic.go:334] "Generic (PLEG): container finished" podID="08eca88c-a4b4-461b-8568-ebbf54645272" containerID="cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4" exitCode=137 Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.819458 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerDied","Data":"11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.819501 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerDied","Data":"cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.820822 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerStarted","Data":"bec8edc98672f19301835694ed5c49318c10a4c5634dfa9bc2728f6b7541a7a3"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.821789 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerStarted","Data":"3b1b3f3010d8c20d2061253491ade333f69730534532a92038800b9cbbc0aede"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.823709 4810 generic.go:334] "Generic (PLEG): container finished" podID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerID="267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7" exitCode=137 Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.823748 4810 generic.go:334] "Generic (PLEG): container finished" podID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerID="09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b" exitCode=137 Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 
15:29:52.823807 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerDied","Data":"267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.823856 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerDied","Data":"09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.825695 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" event={"ID":"b1f44651-e4eb-4cce-a493-9dd9b491b22a","Type":"ContainerDied","Data":"d03f7f66a9a3b08a44840b959b3f1bdcaeb2f46d803e33b39c92670ff95677e9"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.825738 4810 scope.go:117] "RemoveContainer" containerID="b192a1bf8058da9cccb4f9c8a3c59226e8d40284e5443eca3683d4b0120ebdd4" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.825766 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b44bbb4dc-dktkl" Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.826897 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerStarted","Data":"cb5c2099652d015df95731d45206e3f87ff0597889d9ca17b5c94ba96cec083c"} Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.914096 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:52 crc kubenswrapper[4810]: I0219 15:29:52.924656 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b44bbb4dc-dktkl"] Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.493145 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" path="/var/lib/kubelet/pods/b1f44651-e4eb-4cce-a493-9dd9b491b22a/volumes" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.565494 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.658875 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.675891 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs\") pod \"08eca88c-a4b4-461b-8568-ebbf54645272\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.675972 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key\") pod \"08eca88c-a4b4-461b-8568-ebbf54645272\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.676049 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5qb\" (UniqueName: \"kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb\") pod \"08eca88c-a4b4-461b-8568-ebbf54645272\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.676178 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts\") pod \"08eca88c-a4b4-461b-8568-ebbf54645272\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.676248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data\") pod \"08eca88c-a4b4-461b-8568-ebbf54645272\" (UID: \"08eca88c-a4b4-461b-8568-ebbf54645272\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.683959 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs" (OuterVolumeSpecName: "logs") pod "08eca88c-a4b4-461b-8568-ebbf54645272" (UID: "08eca88c-a4b4-461b-8568-ebbf54645272"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.703922 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "08eca88c-a4b4-461b-8568-ebbf54645272" (UID: "08eca88c-a4b4-461b-8568-ebbf54645272"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.706677 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb" (OuterVolumeSpecName: "kube-api-access-wb5qb") pod "08eca88c-a4b4-461b-8568-ebbf54645272" (UID: "08eca88c-a4b4-461b-8568-ebbf54645272"). InnerVolumeSpecName "kube-api-access-wb5qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.730227 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.777807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs\") pod \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.778262 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key\") pod \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.778452 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data\") pod \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.778496 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcbv5\" (UniqueName: \"kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5\") pod \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.778557 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts\") pod \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\" (UID: \"c39b1dd9-9e73-4cca-aea6-e228f1ba5942\") " Feb 19 15:29:53 
crc kubenswrapper[4810]: I0219 15:29:53.779048 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08eca88c-a4b4-461b-8568-ebbf54645272-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.779061 4810 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08eca88c-a4b4-461b-8568-ebbf54645272-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.779071 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5qb\" (UniqueName: \"kubernetes.io/projected/08eca88c-a4b4-461b-8568-ebbf54645272-kube-api-access-wb5qb\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.780264 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs" (OuterVolumeSpecName: "logs") pod "c39b1dd9-9e73-4cca-aea6-e228f1ba5942" (UID: "c39b1dd9-9e73-4cca-aea6-e228f1ba5942"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.806821 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5" (OuterVolumeSpecName: "kube-api-access-pcbv5") pod "c39b1dd9-9e73-4cca-aea6-e228f1ba5942" (UID: "c39b1dd9-9e73-4cca-aea6-e228f1ba5942"). InnerVolumeSpecName "kube-api-access-pcbv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.813495 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c39b1dd9-9e73-4cca-aea6-e228f1ba5942" (UID: "c39b1dd9-9e73-4cca-aea6-e228f1ba5942"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.820844 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts" (OuterVolumeSpecName: "scripts") pod "08eca88c-a4b4-461b-8568-ebbf54645272" (UID: "08eca88c-a4b4-461b-8568-ebbf54645272"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.840432 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts" (OuterVolumeSpecName: "scripts") pod "c39b1dd9-9e73-4cca-aea6-e228f1ba5942" (UID: "c39b1dd9-9e73-4cca-aea6-e228f1ba5942"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.846722 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f56498b8d-9gwmf" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.858533 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data" (OuterVolumeSpecName: "config-data") pod "08eca88c-a4b4-461b-8568-ebbf54645272" (UID: "08eca88c-a4b4-461b-8568-ebbf54645272"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882608 4810 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882656 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882666 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08eca88c-a4b4-461b-8568-ebbf54645272-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882676 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcbv5\" (UniqueName: \"kubernetes.io/projected/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-kube-api-access-pcbv5\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882687 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.882694 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.897395 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.927950 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-849c785789-5xrh2" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.929225 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-849c785789-5xrh2" event={"ID":"c39b1dd9-9e73-4cca-aea6-e228f1ba5942","Type":"ContainerDied","Data":"f7e61fd52ad6569907f8a84cbc32aef547486ffda35028466938ada8d5e3aa10"} Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.929316 4810 scope.go:117] "RemoveContainer" containerID="267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.936728 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.940926 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data" (OuterVolumeSpecName: "config-data") pod "c39b1dd9-9e73-4cca-aea6-e228f1ba5942" (UID: "c39b1dd9-9e73-4cca-aea6-e228f1ba5942"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.984431 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c39b1dd9-9e73-4cca-aea6-e228f1ba5942-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.993083 4810 generic.go:334] "Generic (PLEG): container finished" podID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416" exitCode=1 Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.993180 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerDied","Data":"a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416"} Feb 19 15:29:53 crc kubenswrapper[4810]: I0219 15:29:53.994431 4810 scope.go:117] "RemoveContainer" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416" Feb 19 15:29:53 crc kubenswrapper[4810]: E0219 15:29:53.994910 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3eb2dccd-c5dc-436f-b7a6-954af7bc51c5)\"" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.033391 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-668f7d7fb5-l5kpq" event={"ID":"08eca88c-a4b4-461b-8568-ebbf54645272","Type":"ContainerDied","Data":"b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e"} Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.033589 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-668f7d7fb5-l5kpq" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.045825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerStarted","Data":"a0f55d5dfd4c1951d245770b89cf22415d60ecb97cf8c05e857fc4583af61f68"} Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.080253 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon-log" containerID="cri-o://b7473c6c07a1c77d67ffe62af3e5c262ab61dca816caf8aab0acb14dc5b23ebd" gracePeriod=30 Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.080435 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" event={"ID":"c008ffcd-bb96-47dd-a311-fdc58f6d8918","Type":"ContainerStarted","Data":"86e12660d14ec5ce962dfc203a055048d75b58212e920b0f18356f78c799be3a"} Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.080635 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" containerID="cri-o://5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13" gracePeriod=30 Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.171389 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.250096 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-668f7d7fb5-l5kpq"] Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.280515 4810 scope.go:117] "RemoveContainer" containerID="09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.384014 4810 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.398281 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-849c785789-5xrh2"] Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.406038 4810 scope.go:117] "RemoveContainer" containerID="ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.530817 4810 scope.go:117] "RemoveContainer" containerID="11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.582865 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b886df68b-htd57"] Feb 19 15:29:54 crc kubenswrapper[4810]: E0219 15:29:54.583318 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583346 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: E0219 15:29:54.583365 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583372 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: E0219 15:29:54.583383 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583389 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: E0219 15:29:54.583404 4810 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583411 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: E0219 15:29:54.583419 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" containerName="init" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583425 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" containerName="init" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583585 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583595 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon-log" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583618 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583627 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" containerName="horizon" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.583639 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f44651-e4eb-4cce-a493-9dd9b491b22a" containerName="init" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.584623 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.590067 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.590084 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.614421 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data-custom\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.614482 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d391303-b5ee-4f63-8035-12f123f35e65-logs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.614534 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt94n\" (UniqueName: \"kubernetes.io/projected/8d391303-b5ee-4f63-8035-12f123f35e65-kube-api-access-jt94n\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.614566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-internal-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: 
\"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.615216 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-combined-ca-bundle\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.615271 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-public-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.615391 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.625011 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b886df68b-htd57"] Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.716870 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d391303-b5ee-4f63-8035-12f123f35e65-logs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.716996 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jt94n\" (UniqueName: \"kubernetes.io/projected/8d391303-b5ee-4f63-8035-12f123f35e65-kube-api-access-jt94n\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717059 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-internal-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717143 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-combined-ca-bundle\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-public-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717258 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717308 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data-custom\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.717517 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d391303-b5ee-4f63-8035-12f123f35e65-logs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.726599 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data-custom\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.728692 4810 scope.go:117] "RemoveContainer" containerID="cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.729565 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-config-data\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.734308 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-public-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.737566 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-combined-ca-bundle\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.738929 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d391303-b5ee-4f63-8035-12f123f35e65-internal-tls-certs\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.747284 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt94n\" (UniqueName: \"kubernetes.io/projected/8d391303-b5ee-4f63-8035-12f123f35e65-kube-api-access-jt94n\") pod \"barbican-api-6b886df68b-htd57\" (UID: \"8d391303-b5ee-4f63-8035-12f123f35e65\") " pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:54 crc kubenswrapper[4810]: I0219 15:29:54.913828 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.124137 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerStarted","Data":"41f32629cee0c0ae9efe4c3d3ac560c20dce8a8e902c7f6325c5e1b2d1d8b995"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.129156 4810 generic.go:334] "Generic (PLEG): container finished" podID="4a48946e-058c-4395-bbad-5effb50b2228" containerID="a0f55d5dfd4c1951d245770b89cf22415d60ecb97cf8c05e857fc4583af61f68" exitCode=0 Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.129229 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerDied","Data":"a0f55d5dfd4c1951d245770b89cf22415d60ecb97cf8c05e857fc4583af61f68"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.129701 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerStarted","Data":"c249e977cefa0135aa004a3d9624e2b5787cc21f20239233e79602a670cf0acb"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.131794 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.168132 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f8775989-n9rgr" event={"ID":"f277c31b-ff97-4f3b-aec3-c5cfe9293d60","Type":"ContainerStarted","Data":"9735b3da2b6a95af5e1e5f4aa3e9e29ae0064236f04c1a8806891de53deba27d"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.168192 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58f8775989-n9rgr" 
event={"ID":"f277c31b-ff97-4f3b-aec3-c5cfe9293d60","Type":"ContainerStarted","Data":"8c8893831d66e071f297f357257d2bf8c082ee53a2e2b3bc9428bbd0780f1134"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.180290 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" podStartSLOduration=6.180268984 podStartE2EDuration="6.180268984s" podCreationTimestamp="2026-02-19 15:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:55.156926211 +0000 UTC m=+1224.638956335" watchObservedRunningTime="2026-02-19 15:29:55.180268984 +0000 UTC m=+1224.662299108" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.180782 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerStarted","Data":"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.187406 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" event={"ID":"c008ffcd-bb96-47dd-a311-fdc58f6d8918","Type":"ContainerStarted","Data":"ac2646d5c8b3c7f1a0ce7400b5ee60d1b6c83b39d160cd582242a516f408c1bb"} Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.193637 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58f8775989-n9rgr" podStartSLOduration=4.269471553 podStartE2EDuration="8.193622631s" podCreationTimestamp="2026-02-19 15:29:47 +0000 UTC" firstStartedPulling="2026-02-19 15:29:49.378695368 +0000 UTC m=+1218.860725492" lastFinishedPulling="2026-02-19 15:29:53.302846446 +0000 UTC m=+1222.784876570" observedRunningTime="2026-02-19 15:29:55.189796077 +0000 UTC m=+1224.671826201" watchObservedRunningTime="2026-02-19 15:29:55.193622631 +0000 UTC m=+1224.675652745" Feb 19 
15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.219876 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-75f99f68b4-d7hj4" podStartSLOduration=4.346722758 podStartE2EDuration="8.219841524s" podCreationTimestamp="2026-02-19 15:29:47 +0000 UTC" firstStartedPulling="2026-02-19 15:29:49.378432031 +0000 UTC m=+1218.860462155" lastFinishedPulling="2026-02-19 15:29:53.251550787 +0000 UTC m=+1222.733580921" observedRunningTime="2026-02-19 15:29:55.207589014 +0000 UTC m=+1224.689619138" watchObservedRunningTime="2026-02-19 15:29:55.219841524 +0000 UTC m=+1224.701871648" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.481976 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08eca88c-a4b4-461b-8568-ebbf54645272" path="/var/lib/kubelet/pods/08eca88c-a4b4-461b-8568-ebbf54645272/volumes" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.482730 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39b1dd9-9e73-4cca-aea6-e228f1ba5942" path="/var/lib/kubelet/pods/c39b1dd9-9e73-4cca-aea6-e228f1ba5942/volumes" Feb 19 15:29:55 crc kubenswrapper[4810]: I0219 15:29:55.506972 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b886df68b-htd57"] Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.215045 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerStarted","Data":"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.217436 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerStarted","Data":"32d2af278e824de267fbeedcdd57bbf5777084a34214d092b327d9209c8d753a"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.217558 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.217574 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api-log" containerID="cri-o://41f32629cee0c0ae9efe4c3d3ac560c20dce8a8e902c7f6325c5e1b2d1d8b995" gracePeriod=30 Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.217610 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api" containerID="cri-o://32d2af278e824de267fbeedcdd57bbf5777084a34214d092b327d9209c8d753a" gracePeriod=30 Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.226141 4810 generic.go:334] "Generic (PLEG): container finished" podID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerID="5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13" exitCode=0 Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.226230 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerDied","Data":"5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.234865 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b886df68b-htd57" event={"ID":"8d391303-b5ee-4f63-8035-12f123f35e65","Type":"ContainerStarted","Data":"05c95f5046182804c63d2c91ebf19854d994a7b67e23dae465b4554b045ae33e"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.234907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b886df68b-htd57" event={"ID":"8d391303-b5ee-4f63-8035-12f123f35e65","Type":"ContainerStarted","Data":"50dda5b91868f3e5a95a6dd804fa072c5a73ba040b22717cef0334d8204978c9"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 
15:29:56.234921 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b886df68b-htd57" event={"ID":"8d391303-b5ee-4f63-8035-12f123f35e65","Type":"ContainerStarted","Data":"46fac789a8550e65b9a76f431a80e79197eed3298754c9cf7500b7e0a945007e"} Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.263173 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.025992918 podStartE2EDuration="7.263153858s" podCreationTimestamp="2026-02-19 15:29:49 +0000 UTC" firstStartedPulling="2026-02-19 15:29:52.111355676 +0000 UTC m=+1221.593385800" lastFinishedPulling="2026-02-19 15:29:53.348516626 +0000 UTC m=+1222.830546740" observedRunningTime="2026-02-19 15:29:56.256358032 +0000 UTC m=+1225.738388156" watchObservedRunningTime="2026-02-19 15:29:56.263153858 +0000 UTC m=+1225.745183982" Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.284979 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b886df68b-htd57" podStartSLOduration=2.2849578729999998 podStartE2EDuration="2.284957873s" podCreationTimestamp="2026-02-19 15:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:56.279901239 +0000 UTC m=+1225.761931363" watchObservedRunningTime="2026-02-19 15:29:56.284957873 +0000 UTC m=+1225.766987997" Feb 19 15:29:56 crc kubenswrapper[4810]: I0219 15:29:56.310460 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.310441609 podStartE2EDuration="6.310441609s" podCreationTimestamp="2026-02-19 15:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:29:56.30073366 +0000 UTC m=+1225.782763784" watchObservedRunningTime="2026-02-19 15:29:56.310441609 +0000 UTC 
m=+1225.792471733" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.281696 4810 generic.go:334] "Generic (PLEG): container finished" podID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerID="32d2af278e824de267fbeedcdd57bbf5777084a34214d092b327d9209c8d753a" exitCode=0 Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.282275 4810 generic.go:334] "Generic (PLEG): container finished" podID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerID="41f32629cee0c0ae9efe4c3d3ac560c20dce8a8e902c7f6325c5e1b2d1d8b995" exitCode=143 Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.283685 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerDied","Data":"32d2af278e824de267fbeedcdd57bbf5777084a34214d092b327d9209c8d753a"} Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.283721 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerDied","Data":"41f32629cee0c0ae9efe4c3d3ac560c20dce8a8e902c7f6325c5e1b2d1d8b995"} Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.285264 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.285307 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.629677 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.685362 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.685851 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.685904 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.685928 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.685974 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686022 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686055 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7vrh\" (UniqueName: \"kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh\") pod \"f5db557a-0b89-4a02-b1b2-19bc205acee8\" (UID: \"f5db557a-0b89-4a02-b1b2-19bc205acee8\") " Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686401 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686438 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs" (OuterVolumeSpecName: "logs") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686712 4810 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f5db557a-0b89-4a02-b1b2-19bc205acee8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.686732 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5db557a-0b89-4a02-b1b2-19bc205acee8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.691861 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh" (OuterVolumeSpecName: "kube-api-access-s7vrh") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "kube-api-access-s7vrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.695676 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts" (OuterVolumeSpecName: "scripts") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.701155 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.740624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.758140 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data" (OuterVolumeSpecName: "config-data") pod "f5db557a-0b89-4a02-b1b2-19bc205acee8" (UID: "f5db557a-0b89-4a02-b1b2-19bc205acee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.788485 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.788515 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.788525 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 crc kubenswrapper[4810]: I0219 15:29:57.788544 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5db557a-0b89-4a02-b1b2-19bc205acee8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:57 
crc kubenswrapper[4810]: I0219 15:29:57.788553 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7vrh\" (UniqueName: \"kubernetes.io/projected/f5db557a-0b89-4a02-b1b2-19bc205acee8-kube-api-access-s7vrh\") on node \"crc\" DevicePath \"\"" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.162670 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.294999 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f5db557a-0b89-4a02-b1b2-19bc205acee8","Type":"ContainerDied","Data":"cb5c2099652d015df95731d45206e3f87ff0597889d9ca17b5c94ba96cec083c"} Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.295060 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.295067 4810 scope.go:117] "RemoveContainer" containerID="32d2af278e824de267fbeedcdd57bbf5777084a34214d092b327d9209c8d753a" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.333374 4810 scope.go:117] "RemoveContainer" containerID="41f32629cee0c0ae9efe4c3d3ac560c20dce8a8e902c7f6325c5e1b2d1d8b995" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.340463 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.359068 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.374123 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:58 crc kubenswrapper[4810]: E0219 15:29:58.374580 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.374597 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api" Feb 19 15:29:58 crc kubenswrapper[4810]: E0219 15:29:58.374614 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api-log" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.374624 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api-log" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.374844 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api-log" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.374866 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" containerName="cinder-api" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.375881 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.378378 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.378595 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.378717 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.387787 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403682 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403752 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twsrv\" (UniqueName: \"kubernetes.io/projected/1723b820-73ac-49f3-8716-283bf2c05925-kube-api-access-twsrv\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " 
pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403822 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1723b820-73ac-49f3-8716-283bf2c05925-logs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403864 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-scripts\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.403979 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.404087 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.408657 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data-custom\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.408871 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1723b820-73ac-49f3-8716-283bf2c05925-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.510906 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511246 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data-custom\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511306 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1723b820-73ac-49f3-8716-283bf2c05925-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511359 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511385 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511405 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twsrv\" (UniqueName: \"kubernetes.io/projected/1723b820-73ac-49f3-8716-283bf2c05925-kube-api-access-twsrv\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511426 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1723b820-73ac-49f3-8716-283bf2c05925-logs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511449 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-scripts\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.511598 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1723b820-73ac-49f3-8716-283bf2c05925-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.512047 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1723b820-73ac-49f3-8716-283bf2c05925-logs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.519002 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.520729 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.523167 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data-custom\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.528273 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.531088 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-scripts\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.542732 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twsrv\" (UniqueName: \"kubernetes.io/projected/1723b820-73ac-49f3-8716-283bf2c05925-kube-api-access-twsrv\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.543748 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1723b820-73ac-49f3-8716-283bf2c05925-config-data\") pod \"cinder-api-0\" (UID: \"1723b820-73ac-49f3-8716-283bf2c05925\") " pod="openstack/cinder-api-0" Feb 19 15:29:58 crc kubenswrapper[4810]: I0219 15:29:58.712080 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.124991 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.125402 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.126128 4810 scope.go:117] "RemoveContainer" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416" Feb 19 15:29:59 crc kubenswrapper[4810]: E0219 15:29:59.126594 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3eb2dccd-c5dc-436f-b7a6-954af7bc51c5)\"" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.209004 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.293020 
4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.331318 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1723b820-73ac-49f3-8716-283bf2c05925","Type":"ContainerStarted","Data":"b04502868b0924f6e769d8268d1b745040c99463576aa2faf7bef62c805bcf9a"} Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.343703 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:29:59 crc kubenswrapper[4810]: I0219 15:29:59.459494 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5db557a-0b89-4a02-b1b2-19bc205acee8" path="/var/lib/kubelet/pods/f5db557a-0b89-4a02-b1b2-19bc205acee8/volumes" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.152165 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78"] Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.155062 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.157917 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.158453 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.205564 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78"] Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.281963 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.282075 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.282136 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzchp\" (UniqueName: \"kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.351881 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1723b820-73ac-49f3-8716-283bf2c05925","Type":"ContainerStarted","Data":"7f71476a89f9d71d7cfac23d4d3704a81b53747a76723ed5636896cdb0c1f8fc"} Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.383839 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.383918 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzchp\" (UniqueName: \"kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.383997 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.385457 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.389977 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.402917 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.404307 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzchp\" (UniqueName: \"kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp\") pod \"collect-profiles-29525250-rls78\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.468476 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.524767 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.527195 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.527440 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="dnsmasq-dns" containerID="cri-o://f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963" gracePeriod=10 Feb 19 15:30:00 crc kubenswrapper[4810]: I0219 15:30:00.654373 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.171148 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.212669 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78"] Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.311938 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhf9w\" (UniqueName: \"kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.312050 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.312097 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.312162 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.312209 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.312272 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb\") pod \"4f9534ee-827a-49fb-8588-a5a8a494be3c\" (UID: \"4f9534ee-827a-49fb-8588-a5a8a494be3c\") " Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.405594 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w" (OuterVolumeSpecName: "kube-api-access-vhf9w") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "kube-api-access-vhf9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.414516 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhf9w\" (UniqueName: \"kubernetes.io/projected/4f9534ee-827a-49fb-8588-a5a8a494be3c-kube-api-access-vhf9w\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.414930 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" event={"ID":"40500a46-a16b-4282-86e4-1d99277d7c7a","Type":"ContainerStarted","Data":"679424eab96e65817895723298fd29af9f44246bfd841f6fc36de1e432cd3878"} Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.428158 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config" (OuterVolumeSpecName: "config") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.428298 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1723b820-73ac-49f3-8716-283bf2c05925","Type":"ContainerStarted","Data":"d54f1f3426e34264224ad10ed68b4ec9ced57d19030414035e1145729b72fcf1"} Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.429704 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.434541 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.449516 4810 generic.go:334] "Generic (PLEG): container finished" podID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerID="f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963" exitCode=0 Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.454254 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.473028 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.473010668 podStartE2EDuration="3.473010668s" podCreationTimestamp="2026-02-19 15:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:01.455810526 +0000 UTC m=+1230.937840660" watchObservedRunningTime="2026-02-19 15:30:01.473010668 +0000 UTC m=+1230.955040792" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.489751 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" event={"ID":"4f9534ee-827a-49fb-8588-a5a8a494be3c","Type":"ContainerDied","Data":"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963"} Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.489793 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf5f86dff-7482l" event={"ID":"4f9534ee-827a-49fb-8588-a5a8a494be3c","Type":"ContainerDied","Data":"32fbc60f0dab31f943ed4d231a260fb910bacf46dc6517badaa0cc870b972e03"} Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.489810 4810 scope.go:117] "RemoveContainer" containerID="f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.516524 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.516724 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.522381 4810 scope.go:117] "RemoveContainer" containerID="77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.537684 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.556443 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.570538 4810 scope.go:117] "RemoveContainer" containerID="f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.570658 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.570788 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f9534ee-827a-49fb-8588-a5a8a494be3c" (UID: "4f9534ee-827a-49fb-8588-a5a8a494be3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:01 crc kubenswrapper[4810]: E0219 15:30:01.571901 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963\": container with ID starting with f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963 not found: ID does not exist" containerID="f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.571948 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963"} err="failed to get container status \"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963\": rpc error: code = NotFound desc = could not find container \"f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963\": container with ID starting with f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963 not found: ID does not exist" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.571979 4810 scope.go:117] "RemoveContainer" containerID="77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095" Feb 19 15:30:01 crc kubenswrapper[4810]: E0219 15:30:01.572805 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095\": 
container with ID starting with 77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095 not found: ID does not exist" containerID="77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.572846 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095"} err="failed to get container status \"77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095\": rpc error: code = NotFound desc = could not find container \"77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095\": container with ID starting with 77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095 not found: ID does not exist" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.618189 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.618475 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.618486 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f9534ee-827a-49fb-8588-a5a8a494be3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.786352 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:30:01 crc kubenswrapper[4810]: I0219 15:30:01.794331 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf5f86dff-7482l"] Feb 19 15:30:02 crc kubenswrapper[4810]: I0219 
15:30:02.460841 4810 generic.go:334] "Generic (PLEG): container finished" podID="40500a46-a16b-4282-86e4-1d99277d7c7a" containerID="b50c5c7b6b7301c6c8992f22517d1d5b4a8f3065b75aef28f8faaa61ad94fd27" exitCode=0 Feb 19 15:30:02 crc kubenswrapper[4810]: I0219 15:30:02.461021 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="cinder-scheduler" containerID="cri-o://1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490" gracePeriod=30 Feb 19 15:30:02 crc kubenswrapper[4810]: I0219 15:30:02.461087 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" event={"ID":"40500a46-a16b-4282-86e4-1d99277d7c7a","Type":"ContainerDied","Data":"b50c5c7b6b7301c6c8992f22517d1d5b4a8f3065b75aef28f8faaa61ad94fd27"} Feb 19 15:30:02 crc kubenswrapper[4810]: I0219 15:30:02.462113 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="probe" containerID="cri-o://2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75" gracePeriod=30 Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.450873 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" path="/var/lib/kubelet/pods/4f9534ee-827a-49fb-8588-a5a8a494be3c/volumes" Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.474210 4810 generic.go:334] "Generic (PLEG): container finished" podID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerID="2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75" exitCode=0 Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.474438 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerDied","Data":"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75"} Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.848148 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-695bb7cdc6-72zs2" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.174:9696/\": dial tcp 10.217.0.174:9696: connect: connection refused" Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.853981 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.961744 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume\") pod \"40500a46-a16b-4282-86e4-1d99277d7c7a\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.961972 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume\") pod \"40500a46-a16b-4282-86e4-1d99277d7c7a\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.962247 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzchp\" (UniqueName: \"kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp\") pod \"40500a46-a16b-4282-86e4-1d99277d7c7a\" (UID: \"40500a46-a16b-4282-86e4-1d99277d7c7a\") " Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.963148 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "40500a46-a16b-4282-86e4-1d99277d7c7a" (UID: "40500a46-a16b-4282-86e4-1d99277d7c7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.973525 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40500a46-a16b-4282-86e4-1d99277d7c7a" (UID: "40500a46-a16b-4282-86e4-1d99277d7c7a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:03 crc kubenswrapper[4810]: I0219 15:30:03.973575 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp" (OuterVolumeSpecName: "kube-api-access-kzchp") pod "40500a46-a16b-4282-86e4-1d99277d7c7a" (UID: "40500a46-a16b-4282-86e4-1d99277d7c7a"). InnerVolumeSpecName "kube-api-access-kzchp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.064043 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40500a46-a16b-4282-86e4-1d99277d7c7a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.064083 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzchp\" (UniqueName: \"kubernetes.io/projected/40500a46-a16b-4282-86e4-1d99277d7c7a-kube-api-access-kzchp\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.064097 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40500a46-a16b-4282-86e4-1d99277d7c7a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.266276 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.489608 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" event={"ID":"40500a46-a16b-4282-86e4-1d99277d7c7a","Type":"ContainerDied","Data":"679424eab96e65817895723298fd29af9f44246bfd841f6fc36de1e432cd3878"} Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.489649 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679424eab96e65817895723298fd29af9f44246bfd841f6fc36de1e432cd3878" Feb 19 15:30:04 crc kubenswrapper[4810]: I0219 15:30:04.489726 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78" Feb 19 15:30:05 crc kubenswrapper[4810]: I0219 15:30:05.833104 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:30:05 crc kubenswrapper[4810]: I0219 15:30:05.835277 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:30:05 crc kubenswrapper[4810]: I0219 15:30:05.840353 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.543071 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.736172 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b886df68b-htd57" Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.822082 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"] Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.822775 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79fc56bc44-tfjh4" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api-log" containerID="cri-o://6c0cc91325e6d2565c8ddd9b1eede9d65e56a0efbeee132cbd5a27f02f0ed7ef" gracePeriod=30 Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.822912 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79fc56bc44-tfjh4" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api" containerID="cri-o://800222f8de7a26806971959afe00821e5af0eb096530f9abfdb9f64bd62612cb" gracePeriod=30 Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.865672 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-7555d68ddd-xqj8c" Feb 19 15:30:06 crc kubenswrapper[4810]: I0219 15:30:06.941528 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68ff886dc8-nntj6"] Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.301742 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cd8bf58f4-ktsjk" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.515128 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.538516 4810 generic.go:334] "Generic (PLEG): container finished" podID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerID="1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490" exitCode=0 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.538599 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerDied","Data":"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490"} Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.538630 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3bd9969-3750-460b-95cd-8c52d2e44d82","Type":"ContainerDied","Data":"3b1b3f3010d8c20d2061253491ade333f69730534532a92038800b9cbbc0aede"} Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.538647 4810 scope.go:117] "RemoveContainer" containerID="2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.538771 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.559599 4810 generic.go:334] "Generic (PLEG): container finished" podID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerID="6c0cc91325e6d2565c8ddd9b1eede9d65e56a0efbeee132cbd5a27f02f0ed7ef" exitCode=143 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.559841 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68ff886dc8-nntj6" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-log" containerID="cri-o://0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f" gracePeriod=30 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.560191 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68ff886dc8-nntj6" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-api" containerID="cri-o://4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3" gracePeriod=30 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.560256 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerDied","Data":"6c0cc91325e6d2565c8ddd9b1eede9d65e56a0efbeee132cbd5a27f02f0ed7ef"} Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.601582 4810 scope.go:117] "RemoveContainer" containerID="1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.627550 4810 scope.go:117] "RemoveContainer" containerID="2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75" Feb 19 15:30:07 crc kubenswrapper[4810]: E0219 15:30:07.628266 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75\": container with ID starting with 
2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75 not found: ID does not exist" containerID="2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.628320 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75"} err="failed to get container status \"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75\": rpc error: code = NotFound desc = could not find container \"2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75\": container with ID starting with 2a43f3753096df053129dc1d2079d46040848bff1f282b81bb80d958a8043c75 not found: ID does not exist" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.628419 4810 scope.go:117] "RemoveContainer" containerID="1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490" Feb 19 15:30:07 crc kubenswrapper[4810]: E0219 15:30:07.631584 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490\": container with ID starting with 1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490 not found: ID does not exist" containerID="1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.631629 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490"} err="failed to get container status \"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490\": rpc error: code = NotFound desc = could not find container \"1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490\": container with ID starting with 1088aad748bf16b0aa6aaa6ca8f83095344671dc633f1afff11c7c6c03c2f490 not found: ID does not 
exist" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641117 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641203 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641225 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641342 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641686 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641765 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.641812 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw97l\" (UniqueName: \"kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l\") pod \"e3bd9969-3750-460b-95cd-8c52d2e44d82\" (UID: \"e3bd9969-3750-460b-95cd-8c52d2e44d82\") " Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.642341 4810 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3bd9969-3750-460b-95cd-8c52d2e44d82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.657612 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l" (OuterVolumeSpecName: "kube-api-access-xw97l") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "kube-api-access-xw97l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.657702 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts" (OuterVolumeSpecName: "scripts") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.691605 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.733679 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.743729 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6dfcf65577-bd5w2" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.744736 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw97l\" (UniqueName: \"kubernetes.io/projected/e3bd9969-3750-460b-95cd-8c52d2e44d82-kube-api-access-xw97l\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.744758 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.744767 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.744778 
4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.807655 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.807903 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cc67d5fc8-hs8lf" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-api" containerID="cri-o://4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737" gracePeriod=30 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.809803 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cc67d5fc8-hs8lf" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-httpd" containerID="cri-o://ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f" gracePeriod=30 Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.850906 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data" (OuterVolumeSpecName: "config-data") pod "e3bd9969-3750-460b-95cd-8c52d2e44d82" (UID: "e3bd9969-3750-460b-95cd-8c52d2e44d82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:07 crc kubenswrapper[4810]: I0219 15:30:07.951516 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3bd9969-3750-460b-95cd-8c52d2e44d82-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.162357 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.173452 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.192375 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.208791 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:08 crc kubenswrapper[4810]: E0219 15:30:08.209229 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40500a46-a16b-4282-86e4-1d99277d7c7a" containerName="collect-profiles" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209245 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="40500a46-a16b-4282-86e4-1d99277d7c7a" containerName="collect-profiles" Feb 19 15:30:08 crc kubenswrapper[4810]: E0219 15:30:08.209261 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="cinder-scheduler" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209267 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="cinder-scheduler" Feb 19 15:30:08 crc 
kubenswrapper[4810]: E0219 15:30:08.209283 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="dnsmasq-dns" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209289 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="dnsmasq-dns" Feb 19 15:30:08 crc kubenswrapper[4810]: E0219 15:30:08.209300 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="probe" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209306 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="probe" Feb 19 15:30:08 crc kubenswrapper[4810]: E0219 15:30:08.209315 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="init" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209335 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="init" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209493 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="probe" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209512 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9534ee-827a-49fb-8588-a5a8a494be3c" containerName="dnsmasq-dns" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209525 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" containerName="cinder-scheduler" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.209542 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="40500a46-a16b-4282-86e4-1d99277d7c7a" containerName="collect-profiles" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.210480 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.214291 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.224818 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.361985 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.362065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.362121 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d5e3e9-853c-4988-8746-a6f74e1fe209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.362140 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slfh7\" (UniqueName: \"kubernetes.io/projected/48d5e3e9-853c-4988-8746-a6f74e1fe209-kube-api-access-slfh7\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 
19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.362169 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.362205 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-scripts\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.458876 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79fc56bc44-tfjh4" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": read tcp 10.217.0.2:37770->10.217.0.182:9311: read: connection reset by peer" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.458888 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79fc56bc44-tfjh4" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": read tcp 10.217.0.2:37756->10.217.0.182:9311: read: connection reset by peer" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465183 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465349 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465467 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d5e3e9-853c-4988-8746-a6f74e1fe209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465498 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slfh7\" (UniqueName: \"kubernetes.io/projected/48d5e3e9-853c-4988-8746-a6f74e1fe209-kube-api-access-slfh7\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465763 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.465843 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-scripts\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.466042 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/48d5e3e9-853c-4988-8746-a6f74e1fe209-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.480869 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.485384 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slfh7\" (UniqueName: \"kubernetes.io/projected/48d5e3e9-853c-4988-8746-a6f74e1fe209-kube-api-access-slfh7\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.488944 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.492669 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-scripts\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.493497 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d5e3e9-853c-4988-8746-a6f74e1fe209-config-data\") pod \"cinder-scheduler-0\" (UID: \"48d5e3e9-853c-4988-8746-a6f74e1fe209\") " pod="openstack/cinder-scheduler-0" Feb 
19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.574879 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.587206 4810 generic.go:334] "Generic (PLEG): container finished" podID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerID="800222f8de7a26806971959afe00821e5af0eb096530f9abfdb9f64bd62612cb" exitCode=0 Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.587282 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerDied","Data":"800222f8de7a26806971959afe00821e5af0eb096530f9abfdb9f64bd62612cb"} Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.603245 4810 generic.go:334] "Generic (PLEG): container finished" podID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerID="ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f" exitCode=0 Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.603298 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerDied","Data":"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f"} Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.604504 4810 generic.go:334] "Generic (PLEG): container finished" podID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerID="0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f" exitCode=143 Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.604528 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerDied","Data":"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"} Feb 19 15:30:08 crc kubenswrapper[4810]: I0219 15:30:08.982965 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fc56bc44-tfjh4" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.089283 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlkqn\" (UniqueName: \"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn\") pod \"faf01cf3-b74b-46d8-b589-05ea0195ac24\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.089422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs\") pod \"faf01cf3-b74b-46d8-b589-05ea0195ac24\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.089679 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle\") pod \"faf01cf3-b74b-46d8-b589-05ea0195ac24\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.089732 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data\") pod \"faf01cf3-b74b-46d8-b589-05ea0195ac24\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.089756 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom\") pod \"faf01cf3-b74b-46d8-b589-05ea0195ac24\" (UID: \"faf01cf3-b74b-46d8-b589-05ea0195ac24\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.098768 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn" (OuterVolumeSpecName: "kube-api-access-dlkqn") pod "faf01cf3-b74b-46d8-b589-05ea0195ac24" (UID: "faf01cf3-b74b-46d8-b589-05ea0195ac24"). InnerVolumeSpecName "kube-api-access-dlkqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.099679 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs" (OuterVolumeSpecName: "logs") pod "faf01cf3-b74b-46d8-b589-05ea0195ac24" (UID: "faf01cf3-b74b-46d8-b589-05ea0195ac24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.123919 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "faf01cf3-b74b-46d8-b589-05ea0195ac24" (UID: "faf01cf3-b74b-46d8-b589-05ea0195ac24"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.199375 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf01cf3-b74b-46d8-b589-05ea0195ac24-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.199412 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.199427 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlkqn\" (UniqueName: \"kubernetes.io/projected/faf01cf3-b74b-46d8-b589-05ea0195ac24-kube-api-access-dlkqn\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.204733 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data" (OuterVolumeSpecName: "config-data") pod "faf01cf3-b74b-46d8-b589-05ea0195ac24" (UID: "faf01cf3-b74b-46d8-b589-05ea0195ac24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.219408 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faf01cf3-b74b-46d8-b589-05ea0195ac24" (UID: "faf01cf3-b74b-46d8-b589-05ea0195ac24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.251539 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68ff886dc8-nntj6" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.305948 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.305985 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf01cf3-b74b-46d8-b589-05ea0195ac24-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.358663 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.406934 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407026 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407066 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") " Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407099 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") "
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407138 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") "
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407199 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") "
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.407289 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln9c7\" (UniqueName: \"kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7\") pod \"e0116ca5-826e-4a77-bc6f-11e89c047af8\" (UID: \"e0116ca5-826e-4a77-bc6f-11e89c047af8\") "
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.408061 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs" (OuterVolumeSpecName: "logs") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.415688 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7" (OuterVolumeSpecName: "kube-api-access-ln9c7") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "kube-api-access-ln9c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.426488 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts" (OuterVolumeSpecName: "scripts") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.465712 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bd9969-3750-460b-95cd-8c52d2e44d82" path="/var/lib/kubelet/pods/e3bd9969-3750-460b-95cd-8c52d2e44d82/volumes"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.482434 4810 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podcd961c7d-d551-4f5b-a08a-07d088947698"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podcd961c7d-d551-4f5b-a08a-07d088947698] : Timed out while waiting for systemd to remove kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.482481 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podcd961c7d-d551-4f5b-a08a-07d088947698] : unable to destroy cgroup paths for cgroup [kubepods besteffort podcd961c7d-d551-4f5b-a08a-07d088947698] : Timed out while waiting for systemd to remove kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice" pod="openstack/watcher-api-0" podUID="cd961c7d-d551-4f5b-a08a-07d088947698"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.510429 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln9c7\" (UniqueName: \"kubernetes.io/projected/e0116ca5-826e-4a77-bc6f-11e89c047af8-kube-api-access-ln9c7\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.510456 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.510466 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0116ca5-826e-4a77-bc6f-11e89c047af8-logs\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.527647 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.546801 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data" (OuterVolumeSpecName: "config-data") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.574420 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.599628 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0116ca5-826e-4a77-bc6f-11e89c047af8" (UID: "e0116ca5-826e-4a77-bc6f-11e89c047af8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.617186 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.617208 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.617218 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.617227 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0116ca5-826e-4a77-bc6f-11e89c047af8-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.644041 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695bb7cdc6-72zs2_f0b73197-3c7e-44c3-8a49-35d9e0a40629/neutron-api/0.log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.644294 4810 generic.go:334] "Generic (PLEG): container finished" podID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerID="fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a" exitCode=137
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.644418 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerDied","Data":"fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a"}
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.658918 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fc56bc44-tfjh4" event={"ID":"faf01cf3-b74b-46d8-b589-05ea0195ac24","Type":"ContainerDied","Data":"9a79a3684682be966eafd1448e2f56c188e39ccc6a243e2540b5912c1bbf9c6c"}
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.658958 4810 scope.go:117] "RemoveContainer" containerID="800222f8de7a26806971959afe00821e5af0eb096530f9abfdb9f64bd62612cb"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.659060 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79fc56bc44-tfjh4"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.672116 4810 generic.go:334] "Generic (PLEG): container finished" podID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerID="4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3" exitCode=0
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.673364 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerDied","Data":"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"}
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.673504 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68ff886dc8-nntj6" event={"ID":"e0116ca5-826e-4a77-bc6f-11e89c047af8","Type":"ContainerDied","Data":"4f57c96680174420000630479bd05d2a43d222e27a564e7d19f21df9961dd34c"}
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.673646 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68ff886dc8-nntj6"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.687065 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.687885 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"48d5e3e9-853c-4988-8746-a6f74e1fe209","Type":"ContainerStarted","Data":"40c17b4503d159ceb58fb8b116d2ec739c336b17c2e781d635cec7e728be1677"}
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.692498 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.700297 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79fc56bc44-tfjh4"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.718218 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.742098 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.754564 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.754950 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.754962 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.754976 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.754981 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.754992 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.754997 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.755019 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.755025 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.755220 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.755234 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.755244 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" containerName="barbican-api-log"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.755253 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" containerName="placement-api"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.756185 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.772480 4810 scope.go:117] "RemoveContainer" containerID="6c0cc91325e6d2565c8ddd9b1eede9d65e56a0efbeee132cbd5a27f02f0ed7ef"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.774835 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.793730 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.794355 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.794608 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.813918 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68ff886dc8-nntj6"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.849833 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68ff886dc8-nntj6"]
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.882936 4810 scope.go:117] "RemoveContainer" containerID="4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.940897 4810 scope.go:117] "RemoveContainer" containerID="0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.942825 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.942919 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.942971 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.943045 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6gm\" (UniqueName: \"kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.943122 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.943164 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.943192 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.977546 4810 scope.go:117] "RemoveContainer" containerID="4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.978590 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3\": container with ID starting with 4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3 not found: ID does not exist" containerID="4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.978632 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3"} err="failed to get container status \"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3\": rpc error: code = NotFound desc = could not find container \"4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3\": container with ID starting with 4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3 not found: ID does not exist"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.978663 4810 scope.go:117] "RemoveContainer" containerID="0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"
Feb 19 15:30:09 crc kubenswrapper[4810]: E0219 15:30:09.979292 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f\": container with ID starting with 0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f not found: ID does not exist" containerID="0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"
Feb 19 15:30:09 crc kubenswrapper[4810]: I0219 15:30:09.979315 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f"} err="failed to get container status \"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f\": rpc error: code = NotFound desc = could not find container \"0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f\": container with ID starting with 0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f not found: ID does not exist"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045064 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045132 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045167 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6gm\" (UniqueName: \"kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045234 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045257 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045271 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.045314 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.048165 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.059146 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.059896 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.064664 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.066296 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.066883 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx6gm\" (UniqueName: \"kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.068245 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data\") pod \"watcher-api-0\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.153702 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695bb7cdc6-72zs2_f0b73197-3c7e-44c3-8a49-35d9e0a40629/neutron-api/0.log"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.153783 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695bb7cdc6-72zs2"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.183057 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.249206 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config\") pod \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") "
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.249299 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqw4\" (UniqueName: \"kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4\") pod \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") "
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.249338 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs\") pod \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") "
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.249447 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config\") pod \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") "
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.249604 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle\") pod \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\" (UID: \"f0b73197-3c7e-44c3-8a49-35d9e0a40629\") "
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.255512 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f0b73197-3c7e-44c3-8a49-35d9e0a40629" (UID: "f0b73197-3c7e-44c3-8a49-35d9e0a40629"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.260482 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4" (OuterVolumeSpecName: "kube-api-access-drqw4") pod "f0b73197-3c7e-44c3-8a49-35d9e0a40629" (UID: "f0b73197-3c7e-44c3-8a49-35d9e0a40629"). InnerVolumeSpecName "kube-api-access-drqw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.323458 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config" (OuterVolumeSpecName: "config") pod "f0b73197-3c7e-44c3-8a49-35d9e0a40629" (UID: "f0b73197-3c7e-44c3-8a49-35d9e0a40629"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.349170 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0b73197-3c7e-44c3-8a49-35d9e0a40629" (UID: "f0b73197-3c7e-44c3-8a49-35d9e0a40629"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.355751 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.355781 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.355790 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqw4\" (UniqueName: \"kubernetes.io/projected/f0b73197-3c7e-44c3-8a49-35d9e0a40629-kube-api-access-drqw4\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.355801 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-config\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.434027 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f0b73197-3c7e-44c3-8a49-35d9e0a40629" (UID: "f0b73197-3c7e-44c3-8a49-35d9e0a40629"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.457989 4810 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b73197-3c7e-44c3-8a49-35d9e0a40629-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.720654 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"48d5e3e9-853c-4988-8746-a6f74e1fe209","Type":"ContainerStarted","Data":"5e8701206c646e0d74b8b6626ec698e8e8367005c4d8a1860077b7cc9641a1b4"}
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.736815 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695bb7cdc6-72zs2_f0b73197-3c7e-44c3-8a49-35d9e0a40629/neutron-api/0.log"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.736879 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695bb7cdc6-72zs2" event={"ID":"f0b73197-3c7e-44c3-8a49-35d9e0a40629","Type":"ContainerDied","Data":"b18b549a747d3077270169b17318853029444572e9290aac2a4fb288910d92c3"}
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.736912 4810 scope.go:117] "RemoveContainer" containerID="dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.736979 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695bb7cdc6-72zs2"
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.752647 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.795059 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"]
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.824202 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-695bb7cdc6-72zs2"]
Feb 19 15:30:10 crc kubenswrapper[4810]: I0219 15:30:10.824473 4810 scope.go:117] "RemoveContainer" containerID="fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.284762 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 19 15:30:11 crc kubenswrapper[4810]: E0219 15:30:11.285189 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-api"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.285202 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-api"
Feb 19 15:30:11 crc kubenswrapper[4810]: E0219 15:30:11.285237 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-httpd"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.285243 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-httpd"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.285466 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-httpd"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.285483 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" containerName="neutron-api"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.286083 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.290127 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.290354 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-znmn2"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.290479 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.324612 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.392167 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.392219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.392299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9xc\" (UniqueName: \"kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.392350 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.450565 4810 scope.go:117] "RemoveContainer" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416"
Feb 19 15:30:11 crc kubenswrapper[4810]: E0219 15:30:11.450901 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(3eb2dccd-c5dc-436f-b7a6-954af7bc51c5)\"" pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.456827 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd961c7d-d551-4f5b-a08a-07d088947698" path="/var/lib/kubelet/pods/cd961c7d-d551-4f5b-a08a-07d088947698/volumes"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.457860 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0116ca5-826e-4a77-bc6f-11e89c047af8" path="/var/lib/kubelet/pods/e0116ca5-826e-4a77-bc6f-11e89c047af8/volumes"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.458708 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b73197-3c7e-44c3-8a49-35d9e0a40629" path="/var/lib/kubelet/pods/f0b73197-3c7e-44c3-8a49-35d9e0a40629/volumes"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.460099 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf01cf3-b74b-46d8-b589-05ea0195ac24" path="/var/lib/kubelet/pods/faf01cf3-b74b-46d8-b589-05ea0195ac24/volumes"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.496255 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp9xc\" (UniqueName: \"kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.496538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.496694 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.496783 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.497651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient"
Feb 19 15:30:11 crc
kubenswrapper[4810]: I0219 15:30:11.501197 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.501588 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.514499 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp9xc\" (UniqueName: \"kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc\") pod \"openstackclient\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.619978 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.630874 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.642400 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.666567 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.668282 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.691373 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.806930 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"48d5e3e9-853c-4988-8746-a6f74e1fe209","Type":"ContainerStarted","Data":"eddd4db00bb1996ace3587f0150b3115cfcb9b2033d3154b181b637f959bec85"} Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.813043 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.813111 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.813251 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrb2g\" (UniqueName: \"kubernetes.io/projected/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-kube-api-access-mrb2g\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.813445 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config-secret\") pod \"openstackclient\" 
(UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.825112 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.836857 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.836835585 podStartE2EDuration="3.836835585s" podCreationTimestamp="2026-02-19 15:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:11.832092229 +0000 UTC m=+1241.314122353" watchObservedRunningTime="2026-02-19 15:30:11.836835585 +0000 UTC m=+1241.318865719" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.839302 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerStarted","Data":"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d"} Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.839353 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerStarted","Data":"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758"} Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.839363 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerStarted","Data":"e3006b677518772d18ede2c0df9a671f7e5d00f39c12f380e32799bbd51a8cab"} Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.840349 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:30:11 crc kubenswrapper[4810]: E0219 15:30:11.861686 4810 log.go:32] "RunPodSandbox from runtime service failed" err=< 
Feb 19 15:30:11 crc kubenswrapper[4810]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_fa26ff13-0dc1-4e6b-a9a8-80177afa17af_0(a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce" Netns:"/var/run/netns/f3c74cd4-edce-430c-ab15-13ba451c1e9b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce;K8S_POD_UID=fa26ff13-0dc1-4e6b-a9a8-80177afa17af" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fa26ff13-0dc1-4e6b-a9a8-80177afa17af]: expected pod UID "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" but got "ca8eb29b-bb26-446f-8a22-5da13ff9d5fa" from Kube API Feb 19 15:30:11 crc kubenswrapper[4810]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 15:30:11 crc kubenswrapper[4810]: > Feb 19 15:30:11 crc kubenswrapper[4810]: E0219 15:30:11.861750 4810 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 15:30:11 crc kubenswrapper[4810]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_fa26ff13-0dc1-4e6b-a9a8-80177afa17af_0(a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce): error adding pod openstack_openstackclient to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce" Netns:"/var/run/netns/f3c74cd4-edce-430c-ab15-13ba451c1e9b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a09d4e2428d1f78397a2986858e27dcb31ce24f328f9b342286b54d06d27fdce;K8S_POD_UID=fa26ff13-0dc1-4e6b-a9a8-80177afa17af" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fa26ff13-0dc1-4e6b-a9a8-80177afa17af]: expected pod UID "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" but got "ca8eb29b-bb26-446f-8a22-5da13ff9d5fa" from Kube API Feb 19 15:30:11 crc kubenswrapper[4810]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 15:30:11 crc kubenswrapper[4810]: > pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.888314 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.888294578 podStartE2EDuration="2.888294578s" podCreationTimestamp="2026-02-19 15:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:11.876381175 +0000 UTC m=+1241.358411299" watchObservedRunningTime="2026-02-19 15:30:11.888294578 +0000 UTC m=+1241.370324702" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.914811 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.914880 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.914914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.915000 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrb2g\" (UniqueName: \"kubernetes.io/projected/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-kube-api-access-mrb2g\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.916482 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.920868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.927753 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:11 crc kubenswrapper[4810]: I0219 15:30:11.940003 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrb2g\" (UniqueName: \"kubernetes.io/projected/ca8eb29b-bb26-446f-8a22-5da13ff9d5fa-kube-api-access-mrb2g\") pod \"openstackclient\" (UID: \"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa\") " pod="openstack/openstackclient" Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.011709 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.567872 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.854798 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa","Type":"ContainerStarted","Data":"f38b0fd50dec54e94b9eec955297a6f01f6925bad6cb7973d5c0603fb8bdee87"} Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.855100 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.859052 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fa26ff13-0dc1-4e6b-a9a8-80177afa17af" podUID="ca8eb29b-bb26-446f-8a22-5da13ff9d5fa" Feb 19 15:30:12 crc kubenswrapper[4810]: I0219 15:30:12.864701 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.032756 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle\") pod \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.033018 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret\") pod \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.033076 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config\") pod \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.033134 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp9xc\" (UniqueName: \"kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc\") pod \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\" (UID: \"fa26ff13-0dc1-4e6b-a9a8-80177afa17af\") " Feb 19 15:30:13 crc 
kubenswrapper[4810]: I0219 15:30:13.033720 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" (UID: "fa26ff13-0dc1-4e6b-a9a8-80177afa17af"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.039467 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" (UID: "fa26ff13-0dc1-4e6b-a9a8-80177afa17af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.052468 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" (UID: "fa26ff13-0dc1-4e6b-a9a8-80177afa17af"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.069226 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc" (OuterVolumeSpecName: "kube-api-access-jp9xc") pod "fa26ff13-0dc1-4e6b-a9a8-80177afa17af" (UID: "fa26ff13-0dc1-4e6b-a9a8-80177afa17af"). InnerVolumeSpecName "kube-api-access-jp9xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.137384 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.137418 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.137438 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp9xc\" (UniqueName: \"kubernetes.io/projected/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-kube-api-access-jp9xc\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.137449 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa26ff13-0dc1-4e6b-a9a8-80177afa17af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.458594 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa26ff13-0dc1-4e6b-a9a8-80177afa17af" path="/var/lib/kubelet/pods/fa26ff13-0dc1-4e6b-a9a8-80177afa17af/volumes" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.576236 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.861583 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.861677 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 15:30:13 crc kubenswrapper[4810]: I0219 15:30:13.869807 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fa26ff13-0dc1-4e6b-a9a8-80177afa17af" podUID="ca8eb29b-bb26-446f-8a22-5da13ff9d5fa" Feb 19 15:30:14 crc kubenswrapper[4810]: I0219 15:30:14.154381 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.183546 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.824226 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-78bc5d479f-k79xx"] Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.827908 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.832231 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.832253 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.832421 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.847985 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78bc5d479f-k79xx"] Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.910171 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpx4p\" (UniqueName: \"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-kube-api-access-fpx4p\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.910222 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-public-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.910623 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-combined-ca-bundle\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc 
kubenswrapper[4810]: I0219 15:30:15.910737 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-log-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.910852 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-etc-swift\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.911029 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-internal-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.911126 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-config-data\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:15 crc kubenswrapper[4810]: I0219 15:30:15.911420 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-run-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 
15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013208 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-public-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013270 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-combined-ca-bundle\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-log-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013318 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-etc-swift\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013383 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-internal-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013404 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-config-data\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013474 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-run-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.013506 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpx4p\" (UniqueName: \"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-kube-api-access-fpx4p\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.014010 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-log-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.015024 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9190a865-226b-487c-b0f9-2573f50f0eab-run-httpd\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.032311 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-internal-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.032355 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-public-tls-certs\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.036344 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-config-data\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.039165 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9190a865-226b-487c-b0f9-2573f50f0eab-combined-ca-bundle\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.040830 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpx4p\" (UniqueName: \"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-kube-api-access-fpx4p\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.049578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/9190a865-226b-487c-b0f9-2573f50f0eab-etc-swift\") pod \"swift-proxy-78bc5d479f-k79xx\" (UID: \"9190a865-226b-487c-b0f9-2573f50f0eab\") " pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.153027 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.716340 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78bc5d479f-k79xx"] Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.847658 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.929708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78bc5d479f-k79xx" event={"ID":"9190a865-226b-487c-b0f9-2573f50f0eab","Type":"ContainerStarted","Data":"62eeb2ac06b126e5d4a0047f8650016eb7c2ac3c1f1e63aa5c7a49e0c1bf5ec8"} Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.935752 4810 generic.go:334] "Generic (PLEG): container finished" podID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerID="4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737" exitCode=0 Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.935799 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerDied","Data":"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737"} Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.935844 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cc67d5fc8-hs8lf" event={"ID":"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0","Type":"ContainerDied","Data":"d1b99914aff75d6854dd18c44e4702b0f89c549bb21d23b94afcb4182e4386df"} Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.935865 4810 
scope.go:117] "RemoveContainer" containerID="ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936076 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cc67d5fc8-hs8lf" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936403 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk7nj\" (UniqueName: \"kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj\") pod \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936590 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config\") pod \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936632 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config\") pod \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936653 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle\") pod \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.936679 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs\") pod 
\"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\" (UID: \"fb06ec19-dfd7-459f-9bfc-6c3e1619abb0\") " Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.944033 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" (UID: "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.947348 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj" (OuterVolumeSpecName: "kube-api-access-lk7nj") pod "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" (UID: "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0"). InnerVolumeSpecName "kube-api-access-lk7nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.963891 4810 scope.go:117] "RemoveContainer" containerID="4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.991035 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" (UID: "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:16 crc kubenswrapper[4810]: I0219 15:30:16.997582 4810 scope.go:117] "RemoveContainer" containerID="ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f" Feb 19 15:30:17 crc kubenswrapper[4810]: E0219 15:30:17.001434 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f\": container with ID starting with ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f not found: ID does not exist" containerID="ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.001464 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f"} err="failed to get container status \"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f\": rpc error: code = NotFound desc = could not find container \"ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f\": container with ID starting with ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f not found: ID does not exist" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.001484 4810 scope.go:117] "RemoveContainer" containerID="4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737" Feb 19 15:30:17 crc kubenswrapper[4810]: E0219 15:30:17.002317 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737\": container with ID starting with 4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737 not found: ID does not exist" containerID="4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.002383 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737"} err="failed to get container status \"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737\": rpc error: code = NotFound desc = could not find container \"4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737\": container with ID starting with 4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737 not found: ID does not exist" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.008990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config" (OuterVolumeSpecName: "config") pod "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" (UID: "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.042597 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk7nj\" (UniqueName: \"kubernetes.io/projected/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-kube-api-access-lk7nj\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.042628 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.042639 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.042649 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.054293 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" (UID: "fb06ec19-dfd7-459f-9bfc-6c3e1619abb0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.143958 4810 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.272370 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.282439 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cc67d5fc8-hs8lf"] Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.457807 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" path="/var/lib/kubelet/pods/fb06ec19-dfd7-459f-9bfc-6c3e1619abb0/volumes" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.947089 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78bc5d479f-k79xx" event={"ID":"9190a865-226b-487c-b0f9-2573f50f0eab","Type":"ContainerStarted","Data":"f8ed6d381d40b2e3ef03d951a46de99b3c3bafe1807814ddeddf5352cea41872"} Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.947158 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78bc5d479f-k79xx" event={"ID":"9190a865-226b-487c-b0f9-2573f50f0eab","Type":"ContainerStarted","Data":"e2fc9c32a96f65c2fbdc2383696138996d9691b72d3892e004a27f696c08ac35"} Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.948738 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.948765 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:17 crc kubenswrapper[4810]: I0219 15:30:17.976240 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-78bc5d479f-k79xx" podStartSLOduration=2.976223467 podStartE2EDuration="2.976223467s" podCreationTimestamp="2026-02-19 15:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:17.969494382 +0000 UTC m=+1247.451524536" watchObservedRunningTime="2026-02-19 15:30:17.976223467 +0000 UTC m=+1247.458253591" Feb 19 15:30:18 crc kubenswrapper[4810]: I0219 15:30:18.163230 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-869f57798-ngdtl" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 19 15:30:18 crc kubenswrapper[4810]: I0219 15:30:18.163714 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.753643 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-4f57c96680174420000630479bd05d2a43d222e27a564e7d19f21df9961dd34c": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-4f57c96680174420000630479bd05d2a43d222e27a564e7d19f21df9961dd34c: no such file or directory 
Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.753966 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-32fbc60f0dab31f943ed4d231a260fb910bacf46dc6517badaa0cc870b972e03": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-32fbc60f0dab31f943ed4d231a260fb910bacf46dc6517badaa0cc870b972e03: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.753989 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-conmon-0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-conmon-0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.754007 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-b18b549a747d3077270169b17318853029444572e9290aac2a4fb288910d92c3": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-b18b549a747d3077270169b17318853029444572e9290aac2a4fb288910d92c3: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.754035 4810 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-0f0fdb4ec7d48ce5efa4d4ce421e22663d31ffdf324056a1695cd08d0b02000f.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.754054 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-conmon-77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-conmon-77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.754073 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-77f0991b435da4c5e7f87592511070821c3d2ea7637aa55b69e68f2a981e4095.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757127 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-conmon-fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a.scope": 
0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-conmon-fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757164 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-fc28d9c929a268b7ca57d72d22325d489071902ea7ff07035df3c9d98d6a728a.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757185 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-conmon-4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-conmon-4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757200 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice/crio-4ac6a4c63292ab530356cb491f1c19b551ba50fa93c7f50d74148f2ff94c41a3.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757220 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-conmon-dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-conmon-dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757238 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-conmon-f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-conmon-f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757258 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice/crio-dc4301bc9761619133902901d1997ab38671d0d229f4feca664d5a1fa8e30e7a.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757276 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice/crio-f7ecab250fb259bafddfe5958b677b33308b00c3f83a99f4416f709cb8747963.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.757529 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice/crio-ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f.scope WatchSource:0}: Error finding container ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f: Status 404 returned error can't find the container with id ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.758451 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice/crio-4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737.scope WatchSource:0}: Error finding container 4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737: Status 404 returned error can't find the container with id 4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737 Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.758672 4810 watcher.go:93] Error 
while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848dfe9d_05f4_4ba9_919e_23e9a7ae63d5.slice/crio-conmon-61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848dfe9d_05f4_4ba9_919e_23e9a7ae63d5.slice/crio-conmon-61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.758705 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848dfe9d_05f4_4ba9_919e_23e9a7ae63d5.slice/crio-61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848dfe9d_05f4_4ba9_919e_23e9a7ae63d5.slice/crio-61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.786154 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1f44651_e4eb_4cce_a493_9dd9b491b22a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1f44651_e4eb_4cce_a493_9dd9b491b22a.slice: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.811061 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf01cf3_b74b_46d8_b589_05ea0195ac24.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf01cf3_b74b_46d8_b589_05ea0195ac24.slice: no such file or directory Feb 19 
15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.811455 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-conmon-f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-conmon-f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.811546 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.812301 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-conmon-a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-conmon-a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.812582 4810 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416.scope: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.815398 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3bd9969_3750_460b_95cd_8c52d2e44d82.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3bd9969_3750_460b_95cd_8c52d2e44d82.slice: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.825150 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5db557a_0b89_4a02_b1b2_19bc205acee8.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5db557a_0b89_4a02_b1b2_19bc205acee8.slice: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.838160 4810 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40500a46_a16b_4282_86e4_1d99277d7c7a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40500a46_a16b_4282_86e4_1d99277d7c7a.slice: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: W0219 15:30:18.848260 4810 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa26ff13_0dc1_4e6b_a9a8_80177afa17af.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa26ff13_0dc1_4e6b_a9a8_80177afa17af.slice: no such file or directory Feb 19 15:30:18 crc kubenswrapper[4810]: I0219 15:30:18.849231 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 15:30:18 crc kubenswrapper[4810]: I0219 15:30:18.956804 4810 generic.go:334] "Generic (PLEG): container finished" podID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerID="f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09" exitCode=137 Feb 19 15:30:18 crc kubenswrapper[4810]: I0219 15:30:18.957782 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerDied","Data":"f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09"} Feb 19 15:30:19 crc kubenswrapper[4810]: E0219 15:30:19.040206 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice/crio-conmon-ec03377efcba956dd84e47222d7f2fa8900a14d68a543b851a67f801c749c23f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-conmon-ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-conmon-11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2024a783_c3f9_4e57_b00f_52bec164e64e.slice/crio-conmon-ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2024a783_c3f9_4e57_b00f_52bec164e64e.slice/crio-ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb2dccd_c5dc_436f_b7a6_954af7bc51c5.slice/crio-ea64343d53131969232078700626ee5996ba47b778be09a59da7ab42c9aa2631.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-conmon-48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ca6546_69fd_492d_81c5_bb18c56b045d.slice/crio-conmon-acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice/crio-conmon-267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice/crio-d1b99914aff75d6854dd18c44e4702b0f89c549bb21d23b94afcb4182e4386df\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9534ee_827a_49fb_8588_a5a8a494be3c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-conmon-1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice/crio-267e2a2aa6b67d8303b6404df33bfa4941b4f403604fb949cf5ec932e82ab1b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice/crio-conmon-09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2024a783_c3f9_4e57_b00f_52bec164e64e.slice/crio-398a9dcb2e74ac56cd2827ea038790abeb16b5f6b3573a48b306842a166c3f44\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b73197_3c7e_44c3_8a49_35d9e0a40629.slice\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice/crio-f7e61fd52ad6569907f8a84cbc32aef547486ffda35028466938ada8d5e3aa10\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ca6546_69fd_492d_81c5_bb18c56b045d.slice/crio-acf279e70fd332fdcdd2bf83f0303bb99e19cd10482ff8ab44a134fa747add8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-11bb73290ec186744ef4e88375d87c281860032daa137cc8bd4779ab70117f2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0116ca5_826e_4a77_bc6f_11e89c047af8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ca6546_69fd_492d_81c5_bb18c56b045d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice/crio-3ca7fe1f4f8bad9a3a06d89d4141ff28e32b10f6d3445d4b3f2404b6e71c942f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-conmon-cc512cef3a5d05465ad16d8e6be38874fe910640b0911d914b7146b795d387d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd44d0_7395_44e1_9112_9e8bb4198b93.slice/crio-1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod848dfe9d_05f4_4ba9_919e_23e9a7ae63d5.slice/crio-ce811eb96390a8f8365be6dee0926b85f5365cb868957ed22165ebbf7d343712\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice/crio-b936d9f6773d6dc856145fa11df03aba298425ab6b8a943cdd257e7a943da84e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice/crio-conmon-aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb06ec19_dfd7_459f_9bfc_6c3e1619abb0.slice/crio-conmon-4e847a42f9e6fb1b9f7435b4df35d14a3fd82483b8e4fe2e6eb4d640b1450737.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd961c7d_d551_4f5b_a08a_07d088947698.slice/crio-aa9a5e6b8de561c312023bc0224fd25e600c4ac446d6ec5ee19e031c464523e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ca6546_69fd_492d_81c5_bb18c56b045d.slice/crio-3e6a529b000841e709c2e1d05c4d119a28c8f20c2ece39574181d8df78a6c626\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eca88c_a4b4_461b_8568_ebbf54645272.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2024a783_c3f9_4e57_b00f_52bec164e64e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39b1dd9_9e73_4cca_aea6_e228f1ba5942.slice/crio-09ef2e9038e53c0d3694b2f3cb1b25543038065797746360e5b00060a157df5b.scope\": RecentStats: unable to find data in 
memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c845f2_0069_4ee5_9d4b_b5871e078926.slice/crio-5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:30:19 crc kubenswrapper[4810]: I0219 15:30:19.125184 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:19 crc kubenswrapper[4810]: I0219 15:30:19.125224 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:19 crc kubenswrapper[4810]: I0219 15:30:19.125926 4810 scope.go:117] "RemoveContainer" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416" Feb 19 15:30:20 crc kubenswrapper[4810]: I0219 15:30:20.183453 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 15:30:20 crc kubenswrapper[4810]: I0219 15:30:20.194397 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 15:30:20 crc kubenswrapper[4810]: I0219 15:30:20.288863 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": dial tcp 10.217.0.163:3000: connect: connection refused" Feb 19 15:30:21 crc kubenswrapper[4810]: I0219 15:30:21.156466 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:30:21 crc kubenswrapper[4810]: I0219 15:30:21.165449 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.051924 4810 generic.go:334] "Generic (PLEG): container finished" podID="58c845f2-0069-4ee5-9d4b-b5871e078926" 
containerID="b7473c6c07a1c77d67ffe62af3e5c262ab61dca816caf8aab0acb14dc5b23ebd" exitCode=137 Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.052100 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerDied","Data":"b7473c6c07a1c77d67ffe62af3e5c262ab61dca816caf8aab0acb14dc5b23ebd"} Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.762298 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.797915 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903480 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903569 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903607 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd77w\" (UniqueName: \"kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903635 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903759 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903790 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903841 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903875 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903894 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 
15:30:25.903920 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.903988 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data\") pod \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\" (UID: \"b8cd44d0-7395-44e1-9112-9e8bb4198b93\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904089 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904108 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjb4j\" (UniqueName: \"kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j\") pod \"58c845f2-0069-4ee5-9d4b-b5871e078926\" (UID: \"58c845f2-0069-4ee5-9d4b-b5871e078926\") " Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904426 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs" (OuterVolumeSpecName: "logs") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: 
"58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904463 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904614 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904923 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904941 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8cd44d0-7395-44e1-9112-9e8bb4198b93-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.904955 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58c845f2-0069-4ee5-9d4b-b5871e078926-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.913686 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j" (OuterVolumeSpecName: 
"kube-api-access-zjb4j") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "kube-api-access-zjb4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.913887 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w" (OuterVolumeSpecName: "kube-api-access-jd77w") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "kube-api-access-jd77w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.915990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts" (OuterVolumeSpecName: "scripts") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.916534 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.940102 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts" (OuterVolumeSpecName: "scripts") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.941701 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.948621 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data" (OuterVolumeSpecName: "config-data") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.951050 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.970297 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "58c845f2-0069-4ee5-9d4b-b5871e078926" (UID: "58c845f2-0069-4ee5-9d4b-b5871e078926"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:25 crc kubenswrapper[4810]: I0219 15:30:25.999688 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007379 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007418 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjb4j\" (UniqueName: \"kubernetes.io/projected/58c845f2-0069-4ee5-9d4b-b5871e078926-kube-api-access-zjb4j\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007435 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007448 4810 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007464 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd77w\" (UniqueName: \"kubernetes.io/projected/b8cd44d0-7395-44e1-9112-9e8bb4198b93-kube-api-access-jd77w\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007474 4810 reconciler_common.go:293] "Volume detached for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007486 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58c845f2-0069-4ee5-9d4b-b5871e078926-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007497 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007509 4810 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58c845f2-0069-4ee5-9d4b-b5871e078926-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.007520 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.010753 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data" (OuterVolumeSpecName: "config-data") pod "b8cd44d0-7395-44e1-9112-9e8bb4198b93" (UID: "b8cd44d0-7395-44e1-9112-9e8bb4198b93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.064859 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-869f57798-ngdtl" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.064853 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-869f57798-ngdtl" event={"ID":"58c845f2-0069-4ee5-9d4b-b5871e078926","Type":"ContainerDied","Data":"dd8cab14dea5221ed6bc57de9b6e6053cd08e7d2f18677d44feb73bc0f3396df"} Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.065229 4810 scope.go:117] "RemoveContainer" containerID="5ab1bae28a55f588686fefd9b6e6ee98c22d6657796a662570ae5cd62319bd13" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.067032 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ca8eb29b-bb26-446f-8a22-5da13ff9d5fa","Type":"ContainerStarted","Data":"4d94f7f4346decf2db2e822e04fe90605e7ee0d72e5a98985a765bf82968ebc4"} Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.069432 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerStarted","Data":"4e69a7ec2d6ffef7e8de5181b7cb6f418ce564ddea149f05d15804b56bd3283e"} Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.082952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8cd44d0-7395-44e1-9112-9e8bb4198b93","Type":"ContainerDied","Data":"89e242fd2781af0141617d648aab74ae173b61b87fc7bb8cdef6002ffaed43fc"} Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.083202 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.101805 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.912537385 podStartE2EDuration="15.101785245s" podCreationTimestamp="2026-02-19 15:30:11 +0000 UTC" firstStartedPulling="2026-02-19 15:30:12.574213475 +0000 UTC m=+1242.056243599" lastFinishedPulling="2026-02-19 15:30:25.763461335 +0000 UTC m=+1255.245491459" observedRunningTime="2026-02-19 15:30:26.09139356 +0000 UTC m=+1255.573423704" watchObservedRunningTime="2026-02-19 15:30:26.101785245 +0000 UTC m=+1255.583815389" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.109820 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd44d0-7395-44e1-9112-9e8bb4198b93-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.200000 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78bc5d479f-k79xx" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.215118 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.241471 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.254469 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.259957 4810 scope.go:117] "RemoveContainer" containerID="b7473c6c07a1c77d67ffe62af3e5c262ab61dca816caf8aab0acb14dc5b23ebd" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.297660 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-869f57798-ngdtl"] Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.307990 4810 scope.go:117] 
"RemoveContainer" containerID="f669c7fcc5250fb7338c38806127e385c8e63f008e071b7e746e08ad63a54a09" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.346456 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347751 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon-log" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.347775 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon-log" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347817 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="ceilometer-notification-agent" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.347829 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="ceilometer-notification-agent" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347844 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="sg-core" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.347851 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="sg-core" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347862 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="proxy-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.347903 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="proxy-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347922 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" 
containerName="neutron-api" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.347929 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-api" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.347944 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358228 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: E0219 15:30:26.358296 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358306 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358685 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="proxy-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358713 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="ceilometer-notification-agent" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358735 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" containerName="sg-core" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358747 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" containerName="neutron-httpd" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358767 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb06ec19-dfd7-459f-9bfc-6c3e1619abb0" 
containerName="neutron-api" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358780 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.358797 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" containerName="horizon-log" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.354417 4810 scope.go:117] "RemoveContainer" containerID="1b6e83705ca6e6c238d2cc26ae7440d08d4c7c41779dea1e88d4da0c7c6c4ca7" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.370586 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.370690 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.378010 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.378250 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.392261 4810 scope.go:117] "RemoveContainer" containerID="48bd8312dc5f2e91c1a4d6b015bb83960b232d7ff3a764add13cbac66bd0441f" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520509 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520567 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520593 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520679 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520744 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520794 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.520847 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vpjk\" (UniqueName: \"kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk\") pod \"ceilometer-0\" (UID: 
\"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622282 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622378 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622412 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622460 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vpjk\" (UniqueName: \"kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622541 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622571 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622586 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.622892 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.623116 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.627187 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.627251 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 
15:30:26.628183 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.634442 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.640427 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vpjk\" (UniqueName: \"kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk\") pod \"ceilometer-0\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " pod="openstack/ceilometer-0" Feb 19 15:30:26 crc kubenswrapper[4810]: I0219 15:30:26.714700 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:27 crc kubenswrapper[4810]: I0219 15:30:27.165573 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:27 crc kubenswrapper[4810]: W0219 15:30:27.170761 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03c99b3_d5d6_479a_9b45_045bda62be1e.slice/crio-28190f8c14a28d468a745076cbf90c346864c9199f363697f039ea4c27bb2c7f WatchSource:0}: Error finding container 28190f8c14a28d468a745076cbf90c346864c9199f363697f039ea4c27bb2c7f: Status 404 returned error can't find the container with id 28190f8c14a28d468a745076cbf90c346864c9199f363697f039ea4c27bb2c7f Feb 19 15:30:27 crc kubenswrapper[4810]: I0219 15:30:27.481547 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c845f2-0069-4ee5-9d4b-b5871e078926" path="/var/lib/kubelet/pods/58c845f2-0069-4ee5-9d4b-b5871e078926/volumes" Feb 19 15:30:27 crc kubenswrapper[4810]: I0219 15:30:27.484471 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cd44d0-7395-44e1-9112-9e8bb4198b93" path="/var/lib/kubelet/pods/b8cd44d0-7395-44e1-9112-9e8bb4198b93/volumes" Feb 19 15:30:28 crc kubenswrapper[4810]: I0219 15:30:28.104956 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerStarted","Data":"2d5f4181bcf65b08449001da5a939c392bf7453a34f071c7f2e2b29a55dbc3c9"} Feb 19 15:30:28 crc kubenswrapper[4810]: I0219 15:30:28.105258 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerStarted","Data":"922169bd474755c022d1869b56b5508937a05f190fe4b150d2505095771f9d93"} Feb 19 15:30:28 crc kubenswrapper[4810]: I0219 15:30:28.105268 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerStarted","Data":"28190f8c14a28d468a745076cbf90c346864c9199f363697f039ea4c27bb2c7f"} Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.001679 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.115198 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerStarted","Data":"75f4bc2020206ce4f5c18d99d5247f762fe2e1e5470498d43115d0a1b1bc5184"} Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.124718 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.160748 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.743286 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8ftxl"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.744420 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.764779 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8ftxl"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.844301 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nxd5j"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.845539 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.859389 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nxd5j"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.881723 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.881794 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxfl\" (UniqueName: \"kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.973753 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6e67-account-create-update-lk6cv"] Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.975271 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.980632 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.983662 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.983719 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.983776 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxxfl\" (UniqueName: \"kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.983838 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254j4\" (UniqueName: \"kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.984680 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:29 crc kubenswrapper[4810]: I0219 15:30:29.985702 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6e67-account-create-update-lk6cv"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.047247 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxxfl\" (UniqueName: \"kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl\") pod \"nova-api-db-create-8ftxl\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.057650 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-w2s7h"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.058933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.066728 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.067548 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w2s7h"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.099245 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6d2b\" (UniqueName: \"kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b\") pod \"nova-api-6e67-account-create-update-lk6cv\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.099316 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts\") pod \"nova-api-6e67-account-create-update-lk6cv\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.099543 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.099659 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254j4\" (UniqueName: \"kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.100844 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.151153 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254j4\" (UniqueName: \"kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4\") pod \"nova-cell0-db-create-nxd5j\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.159710 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.164529 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.166976 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-841f-account-create-update-swd7q"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.168281 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.181549 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-841f-account-create-update-swd7q"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.184745 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.201505 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.201565 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svb2k\" (UniqueName: \"kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.201638 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6d2b\" (UniqueName: \"kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b\") pod \"nova-api-6e67-account-create-update-lk6cv\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.201661 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts\") pod \"nova-api-6e67-account-create-update-lk6cv\" 
(UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.202470 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts\") pod \"nova-api-6e67-account-create-update-lk6cv\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.218686 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.233114 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6d2b\" (UniqueName: \"kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b\") pod \"nova-api-6e67-account-create-update-lk6cv\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.296806 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.302795 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.304206 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.304257 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.304290 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svb2k\" (UniqueName: \"kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.304306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8q5\" (UniqueName: \"kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.308701 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.359036 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-da99-account-create-update-4j7hb"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.362370 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.368028 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svb2k\" (UniqueName: \"kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k\") pod \"nova-cell1-db-create-w2s7h\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.372787 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.374784 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da99-account-create-update-4j7hb"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.408464 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.408804 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8q5\" (UniqueName: 
\"kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.409713 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.427485 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8q5\" (UniqueName: \"kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5\") pod \"nova-cell0-841f-account-create-update-swd7q\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.510431 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t982\" (UniqueName: \"kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.510586 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 
crc kubenswrapper[4810]: I0219 15:30:30.612723 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t982\" (UniqueName: \"kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.612836 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.613848 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.631250 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t982\" (UniqueName: \"kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982\") pod \"nova-cell1-da99-account-create-update-4j7hb\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.660596 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.695865 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.732391 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.808326 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8ftxl"] Feb 19 15:30:30 crc kubenswrapper[4810]: I0219 15:30:30.927863 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nxd5j"] Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.028733 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6e67-account-create-update-lk6cv"] Feb 19 15:30:31 crc kubenswrapper[4810]: W0219 15:30:31.038588 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48e0d5d9_1d58_41a5_b740_8c8286edec31.slice/crio-9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7 WatchSource:0}: Error finding container 9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7: Status 404 returned error can't find the container with id 9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7 Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.175660 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8ftxl" event={"ID":"da5e0166-d811-4dcd-9230-976dd1893c11","Type":"ContainerStarted","Data":"ea790df5d5f36cd5c53b9c8735a8d922caa18ea278fcafd7ab38eca09a4e4d29"} Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.181739 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerStarted","Data":"d0173ca27d32542d93975f960f3c985b101c2043974bb22943e94cfa2c3990e5"} Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.181973 
4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-central-agent" containerID="cri-o://922169bd474755c022d1869b56b5508937a05f190fe4b150d2505095771f9d93" gracePeriod=30 Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.182061 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.182259 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="proxy-httpd" containerID="cri-o://d0173ca27d32542d93975f960f3c985b101c2043974bb22943e94cfa2c3990e5" gracePeriod=30 Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.182533 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="sg-core" containerID="cri-o://75f4bc2020206ce4f5c18d99d5247f762fe2e1e5470498d43115d0a1b1bc5184" gracePeriod=30 Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.182520 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-notification-agent" containerID="cri-o://2d5f4181bcf65b08449001da5a939c392bf7453a34f071c7f2e2b29a55dbc3c9" gracePeriod=30 Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.187775 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e67-account-create-update-lk6cv" event={"ID":"48e0d5d9-1d58-41a5-b740-8c8286edec31","Type":"ContainerStarted","Data":"9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7"} Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.195741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nxd5j" 
event={"ID":"f04e1699-2be0-4dca-8e4a-73035fde359f","Type":"ContainerStarted","Data":"19ce66453832709a411387f21a96253dcdb5acebfcdfa40ee91e2d5e02a077dc"} Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.215172 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.88857707 podStartE2EDuration="5.215151138s" podCreationTimestamp="2026-02-19 15:30:26 +0000 UTC" firstStartedPulling="2026-02-19 15:30:27.173386294 +0000 UTC m=+1256.655416418" lastFinishedPulling="2026-02-19 15:30:30.499960362 +0000 UTC m=+1259.981990486" observedRunningTime="2026-02-19 15:30:31.206837614 +0000 UTC m=+1260.688867738" watchObservedRunningTime="2026-02-19 15:30:31.215151138 +0000 UTC m=+1260.697181262" Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.279682 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-w2s7h"] Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.300717 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-841f-account-create-update-swd7q"] Feb 19 15:30:31 crc kubenswrapper[4810]: W0219 15:30:31.316591 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1703853_2754_4348_8c45_dcd98ff5d429.slice/crio-8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf WatchSource:0}: Error finding container 8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf: Status 404 returned error can't find the container with id 8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.494817 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da99-account-create-update-4j7hb"] Feb 19 15:30:31 crc kubenswrapper[4810]: I0219 15:30:31.570915 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 
15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.217557 4810 generic.go:334] "Generic (PLEG): container finished" podID="da5e0166-d811-4dcd-9230-976dd1893c11" containerID="99cf896833f13eecd3fedefc31f58e2b88d17d37a7cb7ae1aea233b7d9a39af1" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.217839 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8ftxl" event={"ID":"da5e0166-d811-4dcd-9230-976dd1893c11","Type":"ContainerDied","Data":"99cf896833f13eecd3fedefc31f58e2b88d17d37a7cb7ae1aea233b7d9a39af1"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223776 4810 generic.go:334] "Generic (PLEG): container finished" podID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerID="d0173ca27d32542d93975f960f3c985b101c2043974bb22943e94cfa2c3990e5" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223815 4810 generic.go:334] "Generic (PLEG): container finished" podID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerID="75f4bc2020206ce4f5c18d99d5247f762fe2e1e5470498d43115d0a1b1bc5184" exitCode=2 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223824 4810 generic.go:334] "Generic (PLEG): container finished" podID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerID="2d5f4181bcf65b08449001da5a939c392bf7453a34f071c7f2e2b29a55dbc3c9" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223859 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerDied","Data":"d0173ca27d32542d93975f960f3c985b101c2043974bb22943e94cfa2c3990e5"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerDied","Data":"75f4bc2020206ce4f5c18d99d5247f762fe2e1e5470498d43115d0a1b1bc5184"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.223891 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerDied","Data":"2d5f4181bcf65b08449001da5a939c392bf7453a34f071c7f2e2b29a55dbc3c9"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.228777 4810 generic.go:334] "Generic (PLEG): container finished" podID="d1703853-2754-4348-8c45-dcd98ff5d429" containerID="ead759deef71357ae0d9ddba72b509ea84ac0664aab15baecfde700a4dc84f66" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.228832 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-841f-account-create-update-swd7q" event={"ID":"d1703853-2754-4348-8c45-dcd98ff5d429","Type":"ContainerDied","Data":"ead759deef71357ae0d9ddba72b509ea84ac0664aab15baecfde700a4dc84f66"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.228855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-841f-account-create-update-swd7q" event={"ID":"d1703853-2754-4348-8c45-dcd98ff5d429","Type":"ContainerStarted","Data":"8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.230398 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" event={"ID":"1f19eb06-d11c-409b-8b7e-516c9a5db815","Type":"ContainerStarted","Data":"7cb43c21f053a8d03036f06cd4952d1e70925f82267f48d6f2c4959f93a370e5"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.230424 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" event={"ID":"1f19eb06-d11c-409b-8b7e-516c9a5db815","Type":"ContainerStarted","Data":"5b4c6c0a8687bad9f0c8df95b6b32fefcb3920c41eb3bf6e882a708592401faf"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.234686 4810 generic.go:334] "Generic (PLEG): container finished" podID="48e0d5d9-1d58-41a5-b740-8c8286edec31" 
containerID="a519fabbf15898bb4c345dee03c392f33a7ca3106e889528c9a61a815ff5b000" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.234731 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e67-account-create-update-lk6cv" event={"ID":"48e0d5d9-1d58-41a5-b740-8c8286edec31","Type":"ContainerDied","Data":"a519fabbf15898bb4c345dee03c392f33a7ca3106e889528c9a61a815ff5b000"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.255526 4810 generic.go:334] "Generic (PLEG): container finished" podID="f04e1699-2be0-4dca-8e4a-73035fde359f" containerID="2fa804cbc29144cbaa9d2e4c3f648166e91009da2ed6d113042e7022e9308b2c" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.255595 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nxd5j" event={"ID":"f04e1699-2be0-4dca-8e4a-73035fde359f","Type":"ContainerDied","Data":"2fa804cbc29144cbaa9d2e4c3f648166e91009da2ed6d113042e7022e9308b2c"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.267317 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" podStartSLOduration=2.267295249 podStartE2EDuration="2.267295249s" podCreationTimestamp="2026-02-19 15:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:32.25390546 +0000 UTC m=+1261.735935584" watchObservedRunningTime="2026-02-19 15:30:32.267295249 +0000 UTC m=+1261.749325373" Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.267761 4810 generic.go:334] "Generic (PLEG): container finished" podID="987d17ad-1427-4709-b5db-19fbb00e8a7c" containerID="d738b05f8038fce0f6f7dca977b306ea2c9695f4bc8b38cb001bb799b15410d3" exitCode=0 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.267926 4810 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-decision-engine-0" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" containerID="cri-o://4e69a7ec2d6ffef7e8de5181b7cb6f418ce564ddea149f05d15804b56bd3283e" gracePeriod=30 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.268099 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2s7h" event={"ID":"987d17ad-1427-4709-b5db-19fbb00e8a7c","Type":"ContainerDied","Data":"d738b05f8038fce0f6f7dca977b306ea2c9695f4bc8b38cb001bb799b15410d3"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.268138 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2s7h" event={"ID":"987d17ad-1427-4709-b5db-19fbb00e8a7c","Type":"ContainerStarted","Data":"8653c17c2feefd4ce9a0d05b4110170b6e33cc6884775621ff1b3f7d64b78a17"} Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.312617 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.312942 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerName="watcher-applier" containerID="cri-o://9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" gracePeriod=30 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.399387 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.399824 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api-log" containerID="cri-o://a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758" gracePeriod=30 Feb 19 15:30:32 crc kubenswrapper[4810]: I0219 15:30:32.400254 4810 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api" containerID="cri-o://d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d" gracePeriod=30 Feb 19 15:30:32 crc kubenswrapper[4810]: E0219 15:30:32.893259 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:30:32 crc kubenswrapper[4810]: E0219 15:30:32.897944 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:30:32 crc kubenswrapper[4810]: E0219 15:30:32.901313 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 15:30:32 crc kubenswrapper[4810]: E0219 15:30:32.901400 4810 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerName="watcher-applier" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.299189 4810 generic.go:334] "Generic (PLEG): container finished" podID="1f19eb06-d11c-409b-8b7e-516c9a5db815" 
containerID="7cb43c21f053a8d03036f06cd4952d1e70925f82267f48d6f2c4959f93a370e5" exitCode=0 Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.299418 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" event={"ID":"1f19eb06-d11c-409b-8b7e-516c9a5db815","Type":"ContainerDied","Data":"7cb43c21f053a8d03036f06cd4952d1e70925f82267f48d6f2c4959f93a370e5"} Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.313561 4810 generic.go:334] "Generic (PLEG): container finished" podID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerID="9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" exitCode=0 Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.313648 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ea855ba-523c-4143-8fe8-b0b1150299d0","Type":"ContainerDied","Data":"9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8"} Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.327823 4810 generic.go:334] "Generic (PLEG): container finished" podID="c772672c-c983-42e8-ae77-bfc8484ad555" containerID="a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758" exitCode=143 Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.328040 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerDied","Data":"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758"} Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.497603 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.580456 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle\") pod \"2ea855ba-523c-4143-8fe8-b0b1150299d0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.580510 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs\") pod \"2ea855ba-523c-4143-8fe8-b0b1150299d0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.580636 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data\") pod \"2ea855ba-523c-4143-8fe8-b0b1150299d0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.580774 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f72k9\" (UniqueName: \"kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9\") pod \"2ea855ba-523c-4143-8fe8-b0b1150299d0\" (UID: \"2ea855ba-523c-4143-8fe8-b0b1150299d0\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.582682 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs" (OuterVolumeSpecName: "logs") pod "2ea855ba-523c-4143-8fe8-b0b1150299d0" (UID: "2ea855ba-523c-4143-8fe8-b0b1150299d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.615224 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9" (OuterVolumeSpecName: "kube-api-access-f72k9") pod "2ea855ba-523c-4143-8fe8-b0b1150299d0" (UID: "2ea855ba-523c-4143-8fe8-b0b1150299d0"). InnerVolumeSpecName "kube-api-access-f72k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.624006 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea855ba-523c-4143-8fe8-b0b1150299d0" (UID: "2ea855ba-523c-4143-8fe8-b0b1150299d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.692742 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f72k9\" (UniqueName: \"kubernetes.io/projected/2ea855ba-523c-4143-8fe8-b0b1150299d0-kube-api-access-f72k9\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.693194 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.693211 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea855ba-523c-4143-8fe8-b0b1150299d0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.728780 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data" 
(OuterVolumeSpecName: "config-data") pod "2ea855ba-523c-4143-8fe8-b0b1150299d0" (UID: "2ea855ba-523c-4143-8fe8-b0b1150299d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.762701 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.795075 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea855ba-523c-4143-8fe8-b0b1150299d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.897379 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts\") pod \"da5e0166-d811-4dcd-9230-976dd1893c11\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.897756 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxxfl\" (UniqueName: \"kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl\") pod \"da5e0166-d811-4dcd-9230-976dd1893c11\" (UID: \"da5e0166-d811-4dcd-9230-976dd1893c11\") " Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.899157 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da5e0166-d811-4dcd-9230-976dd1893c11" (UID: "da5e0166-d811-4dcd-9230-976dd1893c11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.904502 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl" (OuterVolumeSpecName: "kube-api-access-sxxfl") pod "da5e0166-d811-4dcd-9230-976dd1893c11" (UID: "da5e0166-d811-4dcd-9230-976dd1893c11"). InnerVolumeSpecName "kube-api-access-sxxfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.929140 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:33 crc kubenswrapper[4810]: I0219 15:30:33.947820 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.000865 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts\") pod \"f04e1699-2be0-4dca-8e4a-73035fde359f\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.001188 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254j4\" (UniqueName: \"kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4\") pod \"f04e1699-2be0-4dca-8e4a-73035fde359f\" (UID: \"f04e1699-2be0-4dca-8e4a-73035fde359f\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.001891 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxxfl\" (UniqueName: \"kubernetes.io/projected/da5e0166-d811-4dcd-9230-976dd1893c11-kube-api-access-sxxfl\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.001968 4810 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da5e0166-d811-4dcd-9230-976dd1893c11-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.005491 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f04e1699-2be0-4dca-8e4a-73035fde359f" (UID: "f04e1699-2be0-4dca-8e4a-73035fde359f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.005661 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4" (OuterVolumeSpecName: "kube-api-access-254j4") pod "f04e1699-2be0-4dca-8e4a-73035fde359f" (UID: "f04e1699-2be0-4dca-8e4a-73035fde359f"). InnerVolumeSpecName "kube-api-access-254j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.007653 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.031913 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.041236 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103270 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w8q5\" (UniqueName: \"kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5\") pod \"d1703853-2754-4348-8c45-dcd98ff5d429\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103400 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts\") pod \"d1703853-2754-4348-8c45-dcd98ff5d429\" (UID: \"d1703853-2754-4348-8c45-dcd98ff5d429\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103458 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svb2k\" (UniqueName: \"kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k\") pod \"987d17ad-1427-4709-b5db-19fbb00e8a7c\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103523 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts\") pod \"987d17ad-1427-4709-b5db-19fbb00e8a7c\" (UID: \"987d17ad-1427-4709-b5db-19fbb00e8a7c\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103912 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f04e1699-2be0-4dca-8e4a-73035fde359f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.103928 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254j4\" (UniqueName: 
\"kubernetes.io/projected/f04e1699-2be0-4dca-8e4a-73035fde359f-kube-api-access-254j4\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.104161 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1703853-2754-4348-8c45-dcd98ff5d429" (UID: "d1703853-2754-4348-8c45-dcd98ff5d429"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.104348 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "987d17ad-1427-4709-b5db-19fbb00e8a7c" (UID: "987d17ad-1427-4709-b5db-19fbb00e8a7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.109487 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k" (OuterVolumeSpecName: "kube-api-access-svb2k") pod "987d17ad-1427-4709-b5db-19fbb00e8a7c" (UID: "987d17ad-1427-4709-b5db-19fbb00e8a7c"). InnerVolumeSpecName "kube-api-access-svb2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.111151 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5" (OuterVolumeSpecName: "kube-api-access-4w8q5") pod "d1703853-2754-4348-8c45-dcd98ff5d429" (UID: "d1703853-2754-4348-8c45-dcd98ff5d429"). InnerVolumeSpecName "kube-api-access-4w8q5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205172 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205394 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205455 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205480 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205507 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx6gm\" (UniqueName: \"kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205558 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6d2b\" 
(UniqueName: \"kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b\") pod \"48e0d5d9-1d58-41a5-b740-8c8286edec31\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205604 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts\") pod \"48e0d5d9-1d58-41a5-b740-8c8286edec31\" (UID: \"48e0d5d9-1d58-41a5-b740-8c8286edec31\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205636 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.205681 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs\") pod \"c772672c-c983-42e8-ae77-bfc8484ad555\" (UID: \"c772672c-c983-42e8-ae77-bfc8484ad555\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.206167 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1703853-2754-4348-8c45-dcd98ff5d429-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.206194 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svb2k\" (UniqueName: \"kubernetes.io/projected/987d17ad-1427-4709-b5db-19fbb00e8a7c-kube-api-access-svb2k\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.206208 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/987d17ad-1427-4709-b5db-19fbb00e8a7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.206219 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w8q5\" (UniqueName: \"kubernetes.io/projected/d1703853-2754-4348-8c45-dcd98ff5d429-kube-api-access-4w8q5\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.207184 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48e0d5d9-1d58-41a5-b740-8c8286edec31" (UID: "48e0d5d9-1d58-41a5-b740-8c8286edec31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.208378 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs" (OuterVolumeSpecName: "logs") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.224519 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b" (OuterVolumeSpecName: "kube-api-access-n6d2b") pod "48e0d5d9-1d58-41a5-b740-8c8286edec31" (UID: "48e0d5d9-1d58-41a5-b740-8c8286edec31"). InnerVolumeSpecName "kube-api-access-n6d2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.224854 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm" (OuterVolumeSpecName: "kube-api-access-vx6gm") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "kube-api-access-vx6gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.265113 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.270462 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.279480 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.279830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data" (OuterVolumeSpecName: "config-data") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.302861 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c772672c-c983-42e8-ae77-bfc8484ad555" (UID: "c772672c-c983-42e8-ae77-bfc8484ad555"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308251 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308277 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c772672c-c983-42e8-ae77-bfc8484ad555-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308287 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308299 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx6gm\" (UniqueName: \"kubernetes.io/projected/c772672c-c983-42e8-ae77-bfc8484ad555-kube-api-access-vx6gm\") on node \"crc\" DevicePath \"\"" Feb 19 
15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308308 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6d2b\" (UniqueName: \"kubernetes.io/projected/48e0d5d9-1d58-41a5-b740-8c8286edec31-kube-api-access-n6d2b\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308317 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48e0d5d9-1d58-41a5-b740-8c8286edec31-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308346 4810 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308356 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.308363 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c772672c-c983-42e8-ae77-bfc8484ad555-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.338877 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"2ea855ba-523c-4143-8fe8-b0b1150299d0","Type":"ContainerDied","Data":"1d01a4c95612c0c4f1b7f9b7042052db9fc19c8db413469f648bf9735bce00e6"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.338917 4810 scope.go:117] "RemoveContainer" containerID="9386dd3b59b24a748e770d6384d92f3e8aff8a701badb29067310dec0fb2fbb8" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.339023 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.358531 4810 generic.go:334] "Generic (PLEG): container finished" podID="c772672c-c983-42e8-ae77-bfc8484ad555" containerID="d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d" exitCode=0 Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.358668 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.360540 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerDied","Data":"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.360581 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c772672c-c983-42e8-ae77-bfc8484ad555","Type":"ContainerDied","Data":"e3006b677518772d18ede2c0df9a671f7e5d00f39c12f380e32799bbd51a8cab"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.367509 4810 generic.go:334] "Generic (PLEG): container finished" podID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerID="4e69a7ec2d6ffef7e8de5181b7cb6f418ce564ddea149f05d15804b56bd3283e" exitCode=0 Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.367563 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerDied","Data":"4e69a7ec2d6ffef7e8de5181b7cb6f418ce564ddea149f05d15804b56bd3283e"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.370035 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6e67-account-create-update-lk6cv" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.370063 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e67-account-create-update-lk6cv" event={"ID":"48e0d5d9-1d58-41a5-b740-8c8286edec31","Type":"ContainerDied","Data":"9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.370095 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0a850973879b8c16a53cdf917d4a3ec51f5d1f0bea3d2e4973de66d44880c7" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.371485 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nxd5j" event={"ID":"f04e1699-2be0-4dca-8e4a-73035fde359f","Type":"ContainerDied","Data":"19ce66453832709a411387f21a96253dcdb5acebfcdfa40ee91e2d5e02a077dc"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.371510 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ce66453832709a411387f21a96253dcdb5acebfcdfa40ee91e2d5e02a077dc" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.371546 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nxd5j" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.375737 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-w2s7h" event={"ID":"987d17ad-1427-4709-b5db-19fbb00e8a7c","Type":"ContainerDied","Data":"8653c17c2feefd4ce9a0d05b4110170b6e33cc6884775621ff1b3f7d64b78a17"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.375814 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8653c17c2feefd4ce9a0d05b4110170b6e33cc6884775621ff1b3f7d64b78a17" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.375844 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-w2s7h" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.376789 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8ftxl" event={"ID":"da5e0166-d811-4dcd-9230-976dd1893c11","Type":"ContainerDied","Data":"ea790df5d5f36cd5c53b9c8735a8d922caa18ea278fcafd7ab38eca09a4e4d29"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.376809 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea790df5d5f36cd5c53b9c8735a8d922caa18ea278fcafd7ab38eca09a4e4d29" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.376860 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8ftxl" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.380323 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-841f-account-create-update-swd7q" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.380371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-841f-account-create-update-swd7q" event={"ID":"d1703853-2754-4348-8c45-dcd98ff5d429","Type":"ContainerDied","Data":"8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf"} Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.380392 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b87b6cef7dd1cc9076c484a82249ed624df4e78c547c11a7768a92577f632cf" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.520481 4810 scope.go:117] "RemoveContainer" containerID="d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.522968 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.553401 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/watcher-api-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.563877 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.585058 4810 scope.go:117] "RemoveContainer" containerID="a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.585195 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.628690 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629114 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1699-2be0-4dca-8e4a-73035fde359f" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629126 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1699-2be0-4dca-8e4a-73035fde359f" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629139 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5e0166-d811-4dcd-9230-976dd1893c11" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629145 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5e0166-d811-4dcd-9230-976dd1893c11" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629152 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api-log" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629158 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api-log" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629167 4810 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629174 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629190 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerName="watcher-applier" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629196 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerName="watcher-applier" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629212 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987d17ad-1427-4709-b5db-19fbb00e8a7c" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629217 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="987d17ad-1427-4709-b5db-19fbb00e8a7c" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629229 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e0d5d9-1d58-41a5-b740-8c8286edec31" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629235 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e0d5d9-1d58-41a5-b740-8c8286edec31" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.629246 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1703853-2754-4348-8c45-dcd98ff5d429" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629252 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1703853-2754-4348-8c45-dcd98ff5d429" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 
15:30:34.629537 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629561 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="987d17ad-1427-4709-b5db-19fbb00e8a7c" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629572 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e0d5d9-1d58-41a5-b740-8c8286edec31" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629580 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" containerName="watcher-api-log" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629587 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1703853-2754-4348-8c45-dcd98ff5d429" containerName="mariadb-account-create-update" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629597 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5e0166-d811-4dcd-9230-976dd1893c11" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629604 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" containerName="watcher-applier" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.629615 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1699-2be0-4dca-8e4a-73035fde359f" containerName="mariadb-database-create" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.630631 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.638116 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.638302 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.646061 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.663550 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.688495 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.695876 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.700237 4810 scope.go:117] "RemoveContainer" containerID="d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.703188 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d\": container with ID starting with d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d not found: ID does not exist" containerID="d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.703236 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d"} err="failed to get container status 
\"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d\": rpc error: code = NotFound desc = could not find container \"d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d\": container with ID starting with d9c5f503f212d6aa9b0ddd5ddc6ff440b6c542c6960be28ea86ea203ab7ca92d not found: ID does not exist" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.703264 4810 scope.go:117] "RemoveContainer" containerID="a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758" Feb 19 15:30:34 crc kubenswrapper[4810]: E0219 15:30:34.704360 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758\": container with ID starting with a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758 not found: ID does not exist" containerID="a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.704383 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758"} err="failed to get container status \"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758\": rpc error: code = NotFound desc = could not find container \"a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758\": container with ID starting with a3586f026120011072c0978ee9277dd85cce4dc7aaed51f25a7bea5aff2ae758 not found: ID does not exist" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.704395 4810 scope.go:117] "RemoveContainer" containerID="a8232fee767eaee3fa026223e20673e25fa6d954ddb913e148bf6e0e435de416" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722147 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11a7f60-4839-44aa-8615-98de657221f4-logs\") 
pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722191 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722237 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722266 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722287 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3ef20-3f3d-4fa2-8888-36d421303dfd-logs\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722308 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " 
pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722355 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jq8p\" (UniqueName: \"kubernetes.io/projected/79f3ef20-3f3d-4fa2-8888-36d421303dfd-kube-api-access-4jq8p\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722401 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kzq\" (UniqueName: \"kubernetes.io/projected/c11a7f60-4839-44aa-8615-98de657221f4-kube-api-access-h5kzq\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722428 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722466 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-config-data\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.722489 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-config-data\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 
15:30:34.733792 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.749371 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.780559 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824418 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-config-data\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824486 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-config-data\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824533 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11a7f60-4839-44aa-8615-98de657221f4-logs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824594 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824619 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824635 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3ef20-3f3d-4fa2-8888-36d421303dfd-logs\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824655 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824672 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jq8p\" (UniqueName: \"kubernetes.io/projected/79f3ef20-3f3d-4fa2-8888-36d421303dfd-kube-api-access-4jq8p\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824713 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kzq\" (UniqueName: \"kubernetes.io/projected/c11a7f60-4839-44aa-8615-98de657221f4-kube-api-access-h5kzq\") pod \"watcher-api-0\" (UID: 
\"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.824736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.830842 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-config-data\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.831154 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11a7f60-4839-44aa-8615-98de657221f4-logs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.838288 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3ef20-3f3d-4fa2-8888-36d421303dfd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.844761 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.845501 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" 
(UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.845566 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3ef20-3f3d-4fa2-8888-36d421303dfd-logs\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.849018 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.852928 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.857731 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11a7f60-4839-44aa-8615-98de657221f4-config-data\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.861953 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kzq\" (UniqueName: \"kubernetes.io/projected/c11a7f60-4839-44aa-8615-98de657221f4-kube-api-access-h5kzq\") pod \"watcher-api-0\" (UID: \"c11a7f60-4839-44aa-8615-98de657221f4\") " pod="openstack/watcher-api-0" Feb 19 15:30:34 crc kubenswrapper[4810]: 
I0219 15:30:34.870194 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jq8p\" (UniqueName: \"kubernetes.io/projected/79f3ef20-3f3d-4fa2-8888-36d421303dfd-kube-api-access-4jq8p\") pod \"watcher-applier-0\" (UID: \"79f3ef20-3f3d-4fa2-8888-36d421303dfd\") " pod="openstack/watcher-applier-0" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.926047 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data\") pod \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.926108 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bwsx\" (UniqueName: \"kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx\") pod \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.926232 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle\") pod \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.926260 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs\") pod \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.926278 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca\") pod \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\" (UID: \"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5\") " Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.927287 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs" (OuterVolumeSpecName: "logs") pod "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" (UID: "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.929645 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx" (OuterVolumeSpecName: "kube-api-access-2bwsx") pod "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" (UID: "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"). InnerVolumeSpecName "kube-api-access-2bwsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.937508 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.964875 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" (UID: "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:34 crc kubenswrapper[4810]: I0219 15:30:34.983527 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" (UID: "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.027358 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.028566 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.028588 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.028598 4810 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.028607 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bwsx\" (UniqueName: \"kubernetes.io/projected/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-kube-api-access-2bwsx\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.041475 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data" (OuterVolumeSpecName: 
"config-data") pod "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" (UID: "3eb2dccd-c5dc-436f-b7a6-954af7bc51c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.041917 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.129508 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t982\" (UniqueName: \"kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982\") pod \"1f19eb06-d11c-409b-8b7e-516c9a5db815\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.129680 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts\") pod \"1f19eb06-d11c-409b-8b7e-516c9a5db815\" (UID: \"1f19eb06-d11c-409b-8b7e-516c9a5db815\") " Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.130143 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.130431 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f19eb06-d11c-409b-8b7e-516c9a5db815" (UID: "1f19eb06-d11c-409b-8b7e-516c9a5db815"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.133537 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982" (OuterVolumeSpecName: "kube-api-access-9t982") pod "1f19eb06-d11c-409b-8b7e-516c9a5db815" (UID: "1f19eb06-d11c-409b-8b7e-516c9a5db815"). InnerVolumeSpecName "kube-api-access-9t982". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.231691 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t982\" (UniqueName: \"kubernetes.io/projected/1f19eb06-d11c-409b-8b7e-516c9a5db815-kube-api-access-9t982\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.231738 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f19eb06-d11c-409b-8b7e-516c9a5db815-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.404139 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.404169 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3eb2dccd-c5dc-436f-b7a6-954af7bc51c5","Type":"ContainerDied","Data":"3750609201e07e960ce122b5fe6baad963df212daffe611a1c8ba29e4bf01f7a"} Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.404554 4810 scope.go:117] "RemoveContainer" containerID="4e69a7ec2d6ffef7e8de5181b7cb6f418ce564ddea149f05d15804b56bd3283e" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.408014 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" event={"ID":"1f19eb06-d11c-409b-8b7e-516c9a5db815","Type":"ContainerDied","Data":"5b4c6c0a8687bad9f0c8df95b6b32fefcb3920c41eb3bf6e882a708592401faf"} Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.408053 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4c6c0a8687bad9f0c8df95b6b32fefcb3920c41eb3bf6e882a708592401faf" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.408090 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-da99-account-create-update-4j7hb" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.456462 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea855ba-523c-4143-8fe8-b0b1150299d0" path="/var/lib/kubelet/pods/2ea855ba-523c-4143-8fe8-b0b1150299d0/volumes" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.457063 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c772672c-c983-42e8-ae77-bfc8484ad555" path="/var/lib/kubelet/pods/c772672c-c983-42e8-ae77-bfc8484ad555/volumes" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.457599 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.490414 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.504961 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: E0219 15:30:35.505475 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505511 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: E0219 15:30:35.505528 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f19eb06-d11c-409b-8b7e-516c9a5db815" containerName="mariadb-account-create-update" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505535 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f19eb06-d11c-409b-8b7e-516c9a5db815" containerName="mariadb-account-create-update" Feb 19 15:30:35 crc kubenswrapper[4810]: E0219 15:30:35.505557 4810 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505565 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: E0219 15:30:35.505597 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505605 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505833 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505861 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.505874 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f19eb06-d11c-409b-8b7e-516c9a5db815" containerName="mariadb-account-create-update" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.506846 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.510245 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.514995 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.541489 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.589926 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.639110 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.639156 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.639200 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.639237 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrtj\" (UniqueName: \"kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.639489 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.741614 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.742516 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.742679 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.742806 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.742919 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrtj\" (UniqueName: \"kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.743318 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.745667 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.746605 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.748866 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.761398 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrtj\" (UniqueName: \"kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj\") pod \"watcher-decision-engine-0\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:35 crc kubenswrapper[4810]: I0219 15:30:35.827000 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.308133 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.430007 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"79f3ef20-3f3d-4fa2-8888-36d421303dfd","Type":"ContainerStarted","Data":"89b2ac85fe989b6715854d867a0b9e4dbc7254cda52ee4efe559c2eceddac7c9"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.430063 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"79f3ef20-3f3d-4fa2-8888-36d421303dfd","Type":"ContainerStarted","Data":"fbdf32c335aa6dc04fc51b2cdbf0bfd8d8d90e8499fd480593d6ebb30114ca65"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.433296 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31448de0-cbd5-4d71-8107-881c0327fb55","Type":"ContainerStarted","Data":"8b0e5903dccd9d2e5e23e46710ab47f5e2c1b9dea5ccf431992056feeec7f78e"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.439142 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"c11a7f60-4839-44aa-8615-98de657221f4","Type":"ContainerStarted","Data":"a74fdb1f295bf98e0b6a7c00cef5e765a9ec2b3b3bb83e3346c4f8f459e1c631"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.439182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c11a7f60-4839-44aa-8615-98de657221f4","Type":"ContainerStarted","Data":"be01de1818de444acfe4af8cd81669a085babf9fa9bb7f4d2487532346267f24"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.439191 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c11a7f60-4839-44aa-8615-98de657221f4","Type":"ContainerStarted","Data":"92ad3e8c6d58735d2e5a96f379c49b8251db2cb796138e8fb3d05573e9bc6628"} Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.439442 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.454246 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.454231083 podStartE2EDuration="2.454231083s" podCreationTimestamp="2026-02-19 15:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:36.449464336 +0000 UTC m=+1265.931494460" watchObservedRunningTime="2026-02-19 15:30:36.454231083 +0000 UTC m=+1265.936261207" Feb 19 15:30:36 crc kubenswrapper[4810]: I0219 15:30:36.470557 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.470538213 podStartE2EDuration="2.470538213s" podCreationTimestamp="2026-02-19 15:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:36.469754944 +0000 UTC m=+1265.951785078" 
watchObservedRunningTime="2026-02-19 15:30:36.470538213 +0000 UTC m=+1265.952568337" Feb 19 15:30:37 crc kubenswrapper[4810]: I0219 15:30:37.452884 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" path="/var/lib/kubelet/pods/3eb2dccd-c5dc-436f-b7a6-954af7bc51c5/volumes" Feb 19 15:30:37 crc kubenswrapper[4810]: I0219 15:30:37.455764 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31448de0-cbd5-4d71-8107-881c0327fb55","Type":"ContainerStarted","Data":"64d0d64a5da470f2a5ddc1de4e8c04b14ee710ccd412090965ee7e074e161cc8"} Feb 19 15:30:37 crc kubenswrapper[4810]: I0219 15:30:37.467923 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.467905401 podStartE2EDuration="2.467905401s" podCreationTimestamp="2026-02-19 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:37.467795758 +0000 UTC m=+1266.949825872" watchObservedRunningTime="2026-02-19 15:30:37.467905401 +0000 UTC m=+1266.949935535" Feb 19 15:30:38 crc kubenswrapper[4810]: I0219 15:30:38.869124 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.491892 4810 generic.go:334] "Generic (PLEG): container finished" podID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerID="922169bd474755c022d1869b56b5508937a05f190fe4b150d2505095771f9d93" exitCode=0 Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.491937 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerDied","Data":"922169bd474755c022d1869b56b5508937a05f190fe4b150d2505095771f9d93"} Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.617100 
4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.740581 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vpjk\" (UniqueName: \"kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741063 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741141 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741217 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741353 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741417 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.741500 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd\") pod \"e03c99b3-d5d6-479a-9b45-045bda62be1e\" (UID: \"e03c99b3-d5d6-479a-9b45-045bda62be1e\") " Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.742380 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.742958 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.760064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk" (OuterVolumeSpecName: "kube-api-access-8vpjk") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "kube-api-access-8vpjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.764683 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts" (OuterVolumeSpecName: "scripts") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.781573 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.838099 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.843524 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.843790 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vpjk\" (UniqueName: \"kubernetes.io/projected/e03c99b3-d5d6-479a-9b45-045bda62be1e-kube-api-access-8vpjk\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.843867 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.843924 4810 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.843986 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e03c99b3-d5d6-479a-9b45-045bda62be1e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.844042 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.875622 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data" (OuterVolumeSpecName: "config-data") pod "e03c99b3-d5d6-479a-9b45-045bda62be1e" (UID: "e03c99b3-d5d6-479a-9b45-045bda62be1e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:39 crc kubenswrapper[4810]: I0219 15:30:39.946105 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e03c99b3-d5d6-479a-9b45-045bda62be1e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.028582 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.042351 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.317449 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-58xq9"] Feb 19 15:30:40 crc kubenswrapper[4810]: E0219 15:30:40.318007 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.318084 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:40 crc kubenswrapper[4810]: E0219 15:30:40.318144 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="proxy-httpd" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.318193 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="proxy-httpd" Feb 19 15:30:40 crc kubenswrapper[4810]: E0219 15:30:40.318253 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-central-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.318311 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-central-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: E0219 15:30:40.318399 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-notification-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.318656 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-notification-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: E0219 15:30:40.318760 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="sg-core" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.318812 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="sg-core" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319054 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319123 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-central-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319182 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="proxy-httpd" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319233 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="sg-core" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319285 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2dccd-c5dc-436f-b7a6-954af7bc51c5" containerName="watcher-decision-engine" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.319363 4810 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" containerName="ceilometer-notification-agent" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.320006 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.326272 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.326439 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.329140 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nqhd6" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.333449 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-58xq9"] Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.454695 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.454810 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.454983 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.455113 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75crr\" (UniqueName: \"kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.503948 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e03c99b3-d5d6-479a-9b45-045bda62be1e","Type":"ContainerDied","Data":"28190f8c14a28d468a745076cbf90c346864c9199f363697f039ea4c27bb2c7f"} Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.503996 4810 scope.go:117] "RemoveContainer" containerID="d0173ca27d32542d93975f960f3c985b101c2043974bb22943e94cfa2c3990e5" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.504041 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.532777 4810 scope.go:117] "RemoveContainer" containerID="75f4bc2020206ce4f5c18d99d5247f762fe2e1e5470498d43115d0a1b1bc5184" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.535090 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.544642 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.554765 4810 scope.go:117] "RemoveContainer" containerID="2d5f4181bcf65b08449001da5a939c392bf7453a34f071c7f2e2b29a55dbc3c9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.556419 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75crr\" (UniqueName: \"kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.556533 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.556600 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.556654 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.561868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.565037 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.589281 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75crr\" (UniqueName: \"kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.590391 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.593312 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.597119 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts\") pod \"nova-cell0-conductor-db-sync-58xq9\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") " pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.599096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.599736 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.610753 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.652506 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-58xq9" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.706117 4810 scope.go:117] "RemoveContainer" containerID="922169bd474755c022d1869b56b5508937a05f190fe4b150d2505095771f9d93" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.762838 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxthg\" (UniqueName: \"kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763303 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763409 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763525 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763596 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763732 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.763840 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874098 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874161 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874223 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxthg\" (UniqueName: \"kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " 
pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874245 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874272 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874301 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.874315 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.879811 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.880098 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd\") pod 
\"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.881107 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.886815 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.895161 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.900104 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxthg\" (UniqueName: \"kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.960943 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data\") pod \"ceilometer-0\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " pod="openstack/ceilometer-0" Feb 19 15:30:40 crc kubenswrapper[4810]: I0219 15:30:40.993642 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:30:41 crc kubenswrapper[4810]: I0219 15:30:41.361616 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-58xq9"] Feb 19 15:30:41 crc kubenswrapper[4810]: I0219 15:30:41.449766 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03c99b3-d5d6-479a-9b45-045bda62be1e" path="/var/lib/kubelet/pods/e03c99b3-d5d6-479a-9b45-045bda62be1e/volumes" Feb 19 15:30:41 crc kubenswrapper[4810]: I0219 15:30:41.518532 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-58xq9" event={"ID":"972d6f5e-3edf-4b6e-bdde-39c580caea31","Type":"ContainerStarted","Data":"0c01e86e8bf881c49ad9ca06e89f46142a0a38c81d0b0f4cbfef925f99badde4"} Feb 19 15:30:41 crc kubenswrapper[4810]: I0219 15:30:41.557264 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:41 crc kubenswrapper[4810]: W0219 15:30:41.564907 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbe979b4_aa62_4c1f_8329_6eb2ae2cfa38.slice/crio-4ff3102af0db331be9ee6d83e5b6fe8b5c513c45a0e210d2e6a0f658eeddde5d WatchSource:0}: Error finding container 4ff3102af0db331be9ee6d83e5b6fe8b5c513c45a0e210d2e6a0f658eeddde5d: Status 404 returned error can't find the container with id 4ff3102af0db331be9ee6d83e5b6fe8b5c513c45a0e210d2e6a0f658eeddde5d Feb 19 15:30:42 crc kubenswrapper[4810]: I0219 15:30:42.558397 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerStarted","Data":"fe9552106d30d58d9eba25443c982dcd127f8bde0f0223d82b19cb349c04ea05"} Feb 19 15:30:42 crc kubenswrapper[4810]: I0219 15:30:42.558809 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerStarted","Data":"2271728f7abc2e8188f15e24a952eadeafff700ee538da3ec50e4feeabb89491"} Feb 19 15:30:42 crc kubenswrapper[4810]: I0219 15:30:42.558825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerStarted","Data":"4ff3102af0db331be9ee6d83e5b6fe8b5c513c45a0e210d2e6a0f658eeddde5d"} Feb 19 15:30:43 crc kubenswrapper[4810]: I0219 15:30:43.573548 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerStarted","Data":"b5e72355bab674a636cae3f88e4f871dcc819f970a52aa01b409a42a1d43b5a1"} Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.029518 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.039444 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.043388 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.085227 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.601207 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.641188 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 15:30:45.827747 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:45 crc kubenswrapper[4810]: I0219 
15:30:45.860944 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:46 crc kubenswrapper[4810]: I0219 15:30:46.601129 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:46 crc kubenswrapper[4810]: I0219 15:30:46.722642 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:46 crc kubenswrapper[4810]: I0219 15:30:46.771480 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:48 crc kubenswrapper[4810]: I0219 15:30:48.621669 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="31448de0-cbd5-4d71-8107-881c0327fb55" containerName="watcher-decision-engine" containerID="cri-o://64d0d64a5da470f2a5ddc1de4e8c04b14ee710ccd412090965ee7e074e161cc8" gracePeriod=30 Feb 19 15:30:49 crc kubenswrapper[4810]: I0219 15:30:49.652936 4810 generic.go:334] "Generic (PLEG): container finished" podID="31448de0-cbd5-4d71-8107-881c0327fb55" containerID="64d0d64a5da470f2a5ddc1de4e8c04b14ee710ccd412090965ee7e074e161cc8" exitCode=0 Feb 19 15:30:49 crc kubenswrapper[4810]: I0219 15:30:49.653006 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31448de0-cbd5-4d71-8107-881c0327fb55","Type":"ContainerDied","Data":"64d0d64a5da470f2a5ddc1de4e8c04b14ee710ccd412090965ee7e074e161cc8"} Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.533400 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.588907 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnrtj\" (UniqueName: \"kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj\") pod \"31448de0-cbd5-4d71-8107-881c0327fb55\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.589283 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs\") pod \"31448de0-cbd5-4d71-8107-881c0327fb55\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.589413 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data\") pod \"31448de0-cbd5-4d71-8107-881c0327fb55\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.589454 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca\") pod \"31448de0-cbd5-4d71-8107-881c0327fb55\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.589564 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle\") pod \"31448de0-cbd5-4d71-8107-881c0327fb55\" (UID: \"31448de0-cbd5-4d71-8107-881c0327fb55\") " Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.589809 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs" (OuterVolumeSpecName: "logs") pod "31448de0-cbd5-4d71-8107-881c0327fb55" (UID: "31448de0-cbd5-4d71-8107-881c0327fb55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.590277 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31448de0-cbd5-4d71-8107-881c0327fb55-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.601449 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj" (OuterVolumeSpecName: "kube-api-access-qnrtj") pod "31448de0-cbd5-4d71-8107-881c0327fb55" (UID: "31448de0-cbd5-4d71-8107-881c0327fb55"). InnerVolumeSpecName "kube-api-access-qnrtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.618071 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "31448de0-cbd5-4d71-8107-881c0327fb55" (UID: "31448de0-cbd5-4d71-8107-881c0327fb55"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.630165 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31448de0-cbd5-4d71-8107-881c0327fb55" (UID: "31448de0-cbd5-4d71-8107-881c0327fb55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.673802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data" (OuterVolumeSpecName: "config-data") pod "31448de0-cbd5-4d71-8107-881c0327fb55" (UID: "31448de0-cbd5-4d71-8107-881c0327fb55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.683903 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"31448de0-cbd5-4d71-8107-881c0327fb55","Type":"ContainerDied","Data":"8b0e5903dccd9d2e5e23e46710ab47f5e2c1b9dea5ccf431992056feeec7f78e"} Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.683961 4810 scope.go:117] "RemoveContainer" containerID="64d0d64a5da470f2a5ddc1de4e8c04b14ee710ccd412090965ee7e074e161cc8" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.684098 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.691308 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnrtj\" (UniqueName: \"kubernetes.io/projected/31448de0-cbd5-4d71-8107-881c0327fb55-kube-api-access-qnrtj\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.691357 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.691369 4810 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.691377 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31448de0-cbd5-4d71-8107-881c0327fb55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.745420 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.756696 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.769395 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:51 crc kubenswrapper[4810]: E0219 15:30:51.769828 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31448de0-cbd5-4d71-8107-881c0327fb55" containerName="watcher-decision-engine" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.769839 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="31448de0-cbd5-4d71-8107-881c0327fb55" containerName="watcher-decision-engine" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.770071 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="31448de0-cbd5-4d71-8107-881c0327fb55" containerName="watcher-decision-engine" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.770714 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.776386 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.779525 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.793085 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.793131 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.793164 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.793231 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-logs\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.793297 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqm9\" (UniqueName: \"kubernetes.io/projected/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-kube-api-access-5jqm9\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.894479 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqm9\" (UniqueName: \"kubernetes.io/projected/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-kube-api-access-5jqm9\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.894529 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.894555 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.894590 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.894657 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-logs\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.895156 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-logs\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.898965 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.899051 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.899456 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:51 crc kubenswrapper[4810]: I0219 15:30:51.916957 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqm9\" (UniqueName: \"kubernetes.io/projected/fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d-kube-api-access-5jqm9\") pod \"watcher-decision-engine-0\" (UID: \"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d\") " pod="openstack/watcher-decision-engine-0" Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.090652 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.568857 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 15:30:52 crc kubenswrapper[4810]: W0219 15:30:52.577409 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbbd48c8_49fb_4e51_9ba7_f7b37f681b3d.slice/crio-d212b20209dbc57da6d250d4636ac4a952077766a0a1e7e5d41f8f5326424149 WatchSource:0}: Error finding container d212b20209dbc57da6d250d4636ac4a952077766a0a1e7e5d41f8f5326424149: Status 404 returned error can't find the container with id d212b20209dbc57da6d250d4636ac4a952077766a0a1e7e5d41f8f5326424149 Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.694465 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerStarted","Data":"7e16e7b6753a6364938105bbe0304a4ea0d0db7a6e7668e4cc2587c80cf11acf"} Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.694642 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.697771 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-58xq9" event={"ID":"972d6f5e-3edf-4b6e-bdde-39c580caea31","Type":"ContainerStarted","Data":"19b609d4be47506e6c511dded32be9ffbc5fec785d73d8309ff072ff0f1cf61d"} Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.700186 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d","Type":"ContainerStarted","Data":"d212b20209dbc57da6d250d4636ac4a952077766a0a1e7e5d41f8f5326424149"} Feb 19 15:30:52 crc kubenswrapper[4810]: I0219 15:30:52.759384 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.467905205 podStartE2EDuration="12.759367304s" podCreationTimestamp="2026-02-19 15:30:40 +0000 UTC" firstStartedPulling="2026-02-19 15:30:41.568020856 +0000 UTC m=+1271.050050980" lastFinishedPulling="2026-02-19 15:30:50.859482945 +0000 UTC m=+1280.341513079" observedRunningTime="2026-02-19 15:30:52.732401463 +0000 UTC m=+1282.214431587" watchObservedRunningTime="2026-02-19 15:30:52.759367304 +0000 UTC m=+1282.241397428" Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.449364 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31448de0-cbd5-4d71-8107-881c0327fb55" path="/var/lib/kubelet/pods/31448de0-cbd5-4d71-8107-881c0327fb55/volumes" Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.713676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d","Type":"ContainerStarted","Data":"0de2c60fe202f0db8743c5bff46dd731a5b2adf31323c862bd094408a75fcc79"} Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.740197 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-db-sync-58xq9" podStartSLOduration=3.730302551 podStartE2EDuration="13.740177785s" podCreationTimestamp="2026-02-19 15:30:40 +0000 UTC" firstStartedPulling="2026-02-19 15:30:41.366358039 +0000 UTC m=+1270.848388163" lastFinishedPulling="2026-02-19 15:30:51.376233273 +0000 UTC m=+1280.858263397" observedRunningTime="2026-02-19 15:30:52.75673173 +0000 UTC m=+1282.238761854" watchObservedRunningTime="2026-02-19 15:30:53.740177785 +0000 UTC m=+1283.222207929" Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.742237 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.742229905 podStartE2EDuration="2.742229905s" podCreationTimestamp="2026-02-19 15:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:53.735291355 +0000 UTC m=+1283.217321479" watchObservedRunningTime="2026-02-19 15:30:53.742229905 +0000 UTC m=+1283.224260039" Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.948683 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.949000 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-log" containerID="cri-o://a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375" gracePeriod=30 Feb 19 15:30:53 crc kubenswrapper[4810]: I0219 15:30:53.949084 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-httpd" containerID="cri-o://d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630" gracePeriod=30 Feb 19 15:30:54 crc kubenswrapper[4810]: I0219 
15:30:54.725202 4810 generic.go:334] "Generic (PLEG): container finished" podID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerID="a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375" exitCode=143 Feb 19 15:30:54 crc kubenswrapper[4810]: I0219 15:30:54.725280 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerDied","Data":"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375"} Feb 19 15:30:54 crc kubenswrapper[4810]: I0219 15:30:54.865352 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:40794->10.217.0.170:9292: read: connection reset by peer" Feb 19 15:30:54 crc kubenswrapper[4810]: I0219 15:30:54.865429 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:40804->10.217.0.170:9292: read: connection reset by peer" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.432610 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475282 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475398 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475443 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475563 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475702 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h47wl\" (UniqueName: \"kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475779 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.475890 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\" (UID: \"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c\") " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.476298 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.476613 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.482373 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl" (OuterVolumeSpecName: "kube-api-access-h47wl") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "kube-api-access-h47wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.482763 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs" (OuterVolumeSpecName: "logs") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.487486 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts" (OuterVolumeSpecName: "scripts") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.489198 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.518693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.545708 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data" (OuterVolumeSpecName: "config-data") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.569128 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" (UID: "25d19ba7-dcfe-443b-9aec-afd1fc1bee1c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.580645 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.584478 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.584843 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.584936 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc 
kubenswrapper[4810]: I0219 15:30:55.585016 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h47wl\" (UniqueName: \"kubernetes.io/projected/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-kube-api-access-h47wl\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.585097 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.585185 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.614896 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.686685 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.738380 4810 generic.go:334] "Generic (PLEG): container finished" podID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerID="d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630" exitCode=0 Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.738449 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerDied","Data":"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630"} Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.738483 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"25d19ba7-dcfe-443b-9aec-afd1fc1bee1c","Type":"ContainerDied","Data":"8e2ec136ee1702cfd45683995f2deb05321488c4c561ae75b5ecc3c327d09b7a"} Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.738504 4810 scope.go:117] "RemoveContainer" containerID="d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.738715 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.773710 4810 scope.go:117] "RemoveContainer" containerID="a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.794681 4810 scope.go:117] "RemoveContainer" containerID="d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630" Feb 19 15:30:55 crc kubenswrapper[4810]: E0219 15:30:55.795744 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630\": container with ID starting with d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630 not found: ID does not exist" containerID="d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.795797 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630"} err="failed to get container status \"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630\": rpc error: code = NotFound desc = could not find container \"d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630\": container with ID starting with d4f8797a868301d8a063b9ee6d65b2b3734521a53ee7809ebafe72723ce9d630 not found: ID does not exist" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.795822 
4810 scope.go:117] "RemoveContainer" containerID="a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375" Feb 19 15:30:55 crc kubenswrapper[4810]: E0219 15:30:55.796175 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375\": container with ID starting with a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375 not found: ID does not exist" containerID="a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.796207 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375"} err="failed to get container status \"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375\": rpc error: code = NotFound desc = could not find container \"a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375\": container with ID starting with a5b582a70e74643764c5c634d13034b32fb543e1cf8d7471fc29e46f970a5375 not found: ID does not exist" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.848383 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.868070 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.878469 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:55 crc kubenswrapper[4810]: E0219 15:30:55.878998 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-httpd" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.879026 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-httpd" Feb 19 15:30:55 crc kubenswrapper[4810]: E0219 15:30:55.879051 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-log" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.879060 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-log" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.879303 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-log" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.879358 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" containerName="glance-httpd" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.880541 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.885899 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.885919 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.923626 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.923880 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-log" containerID="cri-o://1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c" gracePeriod=30 Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.924308 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-httpd" containerID="cri-o://598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd" gracePeriod=30 Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.929548 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": EOF" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.942541 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.991770 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.991825 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.991874 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc 
kubenswrapper[4810]: I0219 15:30:55.991896 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-scripts\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.991918 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thn9w\" (UniqueName: \"kubernetes.io/projected/41a4af93-6f80-4097-a964-2e3f3055fd3b-kube-api-access-thn9w\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.992276 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.992469 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-logs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:55 crc kubenswrapper[4810]: I0219 15:30:55.992515 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-config-data\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " 
pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093596 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093658 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-logs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093678 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-config-data\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093718 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093737 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: 
I0219 15:30:56.093771 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093790 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-scripts\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.093812 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thn9w\" (UniqueName: \"kubernetes.io/projected/41a4af93-6f80-4097-a964-2e3f3055fd3b-kube-api-access-thn9w\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.094983 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.095201 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.095240 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a4af93-6f80-4097-a964-2e3f3055fd3b-logs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.099123 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.099162 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.103754 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-scripts\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.111781 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a4af93-6f80-4097-a964-2e3f3055fd3b-config-data\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.115518 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thn9w\" (UniqueName: 
\"kubernetes.io/projected/41a4af93-6f80-4097-a964-2e3f3055fd3b-kube-api-access-thn9w\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.131744 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"41a4af93-6f80-4097-a964-2e3f3055fd3b\") " pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.207477 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.656258 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.656976 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-central-agent" containerID="cri-o://2271728f7abc2e8188f15e24a952eadeafff700ee538da3ec50e4feeabb89491" gracePeriod=30 Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.657047 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-notification-agent" containerID="cri-o://fe9552106d30d58d9eba25443c982dcd127f8bde0f0223d82b19cb349c04ea05" gracePeriod=30 Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.657063 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="proxy-httpd" containerID="cri-o://7e16e7b6753a6364938105bbe0304a4ea0d0db7a6e7668e4cc2587c80cf11acf" gracePeriod=30 Feb 19 15:30:56 crc 
kubenswrapper[4810]: I0219 15:30:56.657031 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="sg-core" containerID="cri-o://b5e72355bab674a636cae3f88e4f871dcc819f970a52aa01b409a42a1d43b5a1" gracePeriod=30 Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.747819 4810 generic.go:334] "Generic (PLEG): container finished" podID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerID="1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c" exitCode=143 Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.747883 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerDied","Data":"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c"} Feb 19 15:30:56 crc kubenswrapper[4810]: I0219 15:30:56.783508 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 15:30:56 crc kubenswrapper[4810]: W0219 15:30:56.790084 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41a4af93_6f80_4097_a964_2e3f3055fd3b.slice/crio-982653081c226b3e310d066fa8db4a3c8e86304fe9718438e4fe3b604b0ee8fe WatchSource:0}: Error finding container 982653081c226b3e310d066fa8db4a3c8e86304fe9718438e4fe3b604b0ee8fe: Status 404 returned error can't find the container with id 982653081c226b3e310d066fa8db4a3c8e86304fe9718438e4fe3b604b0ee8fe Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.450264 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d19ba7-dcfe-443b-9aec-afd1fc1bee1c" path="/var/lib/kubelet/pods/25d19ba7-dcfe-443b-9aec-afd1fc1bee1c/volumes" Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761054 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerID="7e16e7b6753a6364938105bbe0304a4ea0d0db7a6e7668e4cc2587c80cf11acf" exitCode=0 Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761293 4810 generic.go:334] "Generic (PLEG): container finished" podID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerID="b5e72355bab674a636cae3f88e4f871dcc819f970a52aa01b409a42a1d43b5a1" exitCode=2 Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761304 4810 generic.go:334] "Generic (PLEG): container finished" podID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerID="2271728f7abc2e8188f15e24a952eadeafff700ee538da3ec50e4feeabb89491" exitCode=0 Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761227 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerDied","Data":"7e16e7b6753a6364938105bbe0304a4ea0d0db7a6e7668e4cc2587c80cf11acf"} Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761383 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerDied","Data":"b5e72355bab674a636cae3f88e4f871dcc819f970a52aa01b409a42a1d43b5a1"} Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.761397 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerDied","Data":"2271728f7abc2e8188f15e24a952eadeafff700ee538da3ec50e4feeabb89491"} Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.765062 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41a4af93-6f80-4097-a964-2e3f3055fd3b","Type":"ContainerStarted","Data":"0e6adbe25af33318a0ec838bf869159bced4e70a9cd918d5db27ffb822e355cc"} Feb 19 15:30:57 crc kubenswrapper[4810]: I0219 15:30:57.765231 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"41a4af93-6f80-4097-a964-2e3f3055fd3b","Type":"ContainerStarted","Data":"982653081c226b3e310d066fa8db4a3c8e86304fe9718438e4fe3b604b0ee8fe"} Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.616608 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643254 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643318 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643356 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643418 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrng\" (UniqueName: \"kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643487 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643524 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643555 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.643569 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle\") pod \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\" (UID: \"addf00fe-9b9b-41d4-bd81-4e5f2c339fff\") " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.644504 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.644598 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs" (OuterVolumeSpecName: "logs") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.660584 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.667400 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng" (OuterVolumeSpecName: "kube-api-access-tgrng") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "kube-api-access-tgrng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.698493 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts" (OuterVolumeSpecName: "scripts") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.740085 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.744351 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data" (OuterVolumeSpecName: "config-data") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.745784 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.745815 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.748063 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.748194 4810 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.748213 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgrng\" (UniqueName: \"kubernetes.io/projected/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-kube-api-access-tgrng\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.748227 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.748267 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.783991 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.786912 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"41a4af93-6f80-4097-a964-2e3f3055fd3b","Type":"ContainerStarted","Data":"06214b2a930e66c57e53806652bef4bf287da47703b5c4cea0142b8a75acc4b7"} Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.793036 4810 generic.go:334] "Generic (PLEG): container finished" podID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerID="598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd" exitCode=0 Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.793076 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerDied","Data":"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd"} Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.793103 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"addf00fe-9b9b-41d4-bd81-4e5f2c339fff","Type":"ContainerDied","Data":"58cd5599b992d54d64de99ed5546382a6f34cf94866c9af8e9254502abddbf03"} Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.793120 4810 scope.go:117] "RemoveContainer" containerID="598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd" Feb 19 15:30:58 crc 
kubenswrapper[4810]: I0219 15:30:58.793184 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.801550 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "addf00fe-9b9b-41d4-bd81-4e5f2c339fff" (UID: "addf00fe-9b9b-41d4-bd81-4e5f2c339fff"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.826138 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.826120014 podStartE2EDuration="3.826120014s" podCreationTimestamp="2026-02-19 15:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:30:58.809777983 +0000 UTC m=+1288.291808097" watchObservedRunningTime="2026-02-19 15:30:58.826120014 +0000 UTC m=+1288.308150138" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.850053 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/addf00fe-9b9b-41d4-bd81-4e5f2c339fff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.850077 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.898014 4810 scope.go:117] "RemoveContainer" containerID="1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.932412 4810 scope.go:117] "RemoveContainer" 
containerID="598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd" Feb 19 15:30:58 crc kubenswrapper[4810]: E0219 15:30:58.932923 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd\": container with ID starting with 598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd not found: ID does not exist" containerID="598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.932965 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd"} err="failed to get container status \"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd\": rpc error: code = NotFound desc = could not find container \"598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd\": container with ID starting with 598ccff405c1e1906ae06df4732ddadd78333232ea68a893ba2db6513c96c3dd not found: ID does not exist" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.932991 4810 scope.go:117] "RemoveContainer" containerID="1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c" Feb 19 15:30:58 crc kubenswrapper[4810]: E0219 15:30:58.933345 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c\": container with ID starting with 1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c not found: ID does not exist" containerID="1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c" Feb 19 15:30:58 crc kubenswrapper[4810]: I0219 15:30:58.933375 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c"} err="failed to get container status \"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c\": rpc error: code = NotFound desc = could not find container \"1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c\": container with ID starting with 1f3fdcf1ede0870a63fe45639c44983223bdd32c1b88c14ab338ab8b9a3f039c not found: ID does not exist" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.141630 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.154229 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.169961 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:30:59 crc kubenswrapper[4810]: E0219 15:30:59.170444 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-log" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.170470 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-log" Feb 19 15:30:59 crc kubenswrapper[4810]: E0219 15:30:59.170507 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-httpd" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.170517 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-httpd" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.170716 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-log" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.170739 4810 
memory_manager.go:354] "RemoveStaleState removing state" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" containerName="glance-httpd" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.171923 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.176489 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.177971 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.188510 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257581 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257632 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257659 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257721 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257757 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257782 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.257811 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprcz\" (UniqueName: \"kubernetes.io/projected/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-kube-api-access-sprcz\") pod 
\"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384090 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384178 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprcz\" (UniqueName: \"kubernetes.io/projected/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-kube-api-access-sprcz\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384350 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384389 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384438 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384547 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.384781 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.385128 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc 
kubenswrapper[4810]: I0219 15:30:59.385176 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.389454 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.390056 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.390479 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.397636 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.418095 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-sprcz\" (UniqueName: \"kubernetes.io/projected/6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4-kube-api-access-sprcz\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.422224 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4\") " pod="openstack/glance-default-internal-api-0" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.452716 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="addf00fe-9b9b-41d4-bd81-4e5f2c339fff" path="/var/lib/kubelet/pods/addf00fe-9b9b-41d4-bd81-4e5f2c339fff/volumes" Feb 19 15:30:59 crc kubenswrapper[4810]: I0219 15:30:59.490115 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 15:31:00 crc kubenswrapper[4810]: I0219 15:31:00.020294 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 15:31:00 crc kubenswrapper[4810]: I0219 15:31:00.822932 4810 generic.go:334] "Generic (PLEG): container finished" podID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerID="fe9552106d30d58d9eba25443c982dcd127f8bde0f0223d82b19cb349c04ea05" exitCode=0 Feb 19 15:31:00 crc kubenswrapper[4810]: I0219 15:31:00.823413 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerDied","Data":"fe9552106d30d58d9eba25443c982dcd127f8bde0f0223d82b19cb349c04ea05"} Feb 19 15:31:00 crc kubenswrapper[4810]: I0219 15:31:00.825733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4","Type":"ContainerStarted","Data":"9a7a844afa569d17708a027299d20c3f8da45965cf7fc8489248263f5ff699a9"} Feb 19 15:31:00 crc kubenswrapper[4810]: I0219 15:31:00.825755 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4","Type":"ContainerStarted","Data":"030adf3e30e267b79173d4d483260554c095ee440f7945b2cccc696243dbe0bd"} Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.207760 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323367 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323645 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxthg\" (UniqueName: \"kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323696 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323801 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd\") pod 
\"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323909 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323930 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.323965 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd\") pod \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\" (UID: \"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38\") " Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.324368 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.324767 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.334500 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts" (OuterVolumeSpecName: "scripts") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.334653 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg" (OuterVolumeSpecName: "kube-api-access-jxthg") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "kube-api-access-jxthg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.350823 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.411435 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.421061 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data" (OuterVolumeSpecName: "config-data") pod "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" (UID: "bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426492 4810 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426527 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426540 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426552 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426565 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxthg\" (UniqueName: \"kubernetes.io/projected/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-kube-api-access-jxthg\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426579 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.426590 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.837784 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4","Type":"ContainerStarted","Data":"8f482a6ed999680b2d59aa1c6bace5119495fcc6394430164430e6c470400848"} Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.841443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38","Type":"ContainerDied","Data":"4ff3102af0db331be9ee6d83e5b6fe8b5c513c45a0e210d2e6a0f658eeddde5d"} Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.841500 4810 scope.go:117] "RemoveContainer" containerID="7e16e7b6753a6364938105bbe0304a4ea0d0db7a6e7668e4cc2587c80cf11acf" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.841683 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.861798 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.861781425 podStartE2EDuration="2.861781425s" podCreationTimestamp="2026-02-19 15:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:01.856138407 +0000 UTC m=+1291.338168531" watchObservedRunningTime="2026-02-19 15:31:01.861781425 +0000 UTC m=+1291.343811549" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.875701 4810 scope.go:117] "RemoveContainer" containerID="b5e72355bab674a636cae3f88e4f871dcc819f970a52aa01b409a42a1d43b5a1" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.877658 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.884905 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.899091 4810 scope.go:117] "RemoveContainer" containerID="fe9552106d30d58d9eba25443c982dcd127f8bde0f0223d82b19cb349c04ea05" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.906865 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:01 crc kubenswrapper[4810]: E0219 15:31:01.907410 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-central-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907432 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-central-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: E0219 15:31:01.907459 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-notification-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907469 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-notification-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: E0219 15:31:01.907490 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="proxy-httpd" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907499 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="proxy-httpd" Feb 19 15:31:01 crc kubenswrapper[4810]: E0219 15:31:01.907513 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="sg-core" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907521 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="sg-core" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907765 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="proxy-httpd" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907793 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-central-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907813 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="ceilometer-notification-agent" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.907832 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" containerName="sg-core" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.911594 4810 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.918758 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.918988 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.924180 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:01 crc kubenswrapper[4810]: I0219 15:31:01.931138 4810 scope.go:117] "RemoveContainer" containerID="2271728f7abc2e8188f15e24a952eadeafff700ee538da3ec50e4feeabb89491" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039043 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039377 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039426 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039486 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039573 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hx8\" (UniqueName: \"kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039649 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.039666 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.091342 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.131969 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143219 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data\") pod \"ceilometer-0\" (UID: 
\"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143260 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143309 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143482 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143534 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.143571 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hx8\" 
(UniqueName: \"kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.144446 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.144868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.151394 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.152020 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.163206 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.165581 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hx8\" (UniqueName: \"kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.168714 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data\") pod \"ceilometer-0\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") " pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.233251 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.732090 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.741618 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.857639 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerStarted","Data":"6c4b9f3d928f0cccc0fb2b10a58ce849fff32f26291f366f0504f6cc2013df4e"} Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.858296 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 15:31:02 crc kubenswrapper[4810]: I0219 15:31:02.914367 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 15:31:03 crc kubenswrapper[4810]: I0219 15:31:03.438375 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:03 crc kubenswrapper[4810]: I0219 15:31:03.451205 4810 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38" path="/var/lib/kubelet/pods/bbe979b4-aa62-4c1f-8329-6eb2ae2cfa38/volumes"
Feb 19 15:31:03 crc kubenswrapper[4810]: I0219 15:31:03.874821 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerStarted","Data":"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"}
Feb 19 15:31:03 crc kubenswrapper[4810]: I0219 15:31:03.875278 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerStarted","Data":"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"}
Feb 19 15:31:04 crc kubenswrapper[4810]: I0219 15:31:04.901300 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerStarted","Data":"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"}
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.208381 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.208716 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.243120 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.248735 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919241 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerStarted","Data":"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"}
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919567 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919581 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919727 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-central-agent" containerID="cri-o://c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094" gracePeriod=30
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919803 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="proxy-httpd" containerID="cri-o://468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835" gracePeriod=30
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919839 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="sg-core" containerID="cri-o://961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7" gracePeriod=30
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.919868 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-notification-agent" containerID="cri-o://c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951" gracePeriod=30
Feb 19 15:31:06 crc kubenswrapper[4810]: I0219 15:31:06.952283 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.983119187 podStartE2EDuration="5.952261726s" podCreationTimestamp="2026-02-19 15:31:01 +0000 UTC" firstStartedPulling="2026-02-19 15:31:02.741306692 +0000 UTC m=+1292.223336826" lastFinishedPulling="2026-02-19 15:31:05.710449241 +0000 UTC m=+1295.192479365" observedRunningTime="2026-02-19 15:31:06.945245654 +0000 UTC m=+1296.427275778" watchObservedRunningTime="2026-02-19 15:31:06.952261726 +0000 UTC m=+1296.434291860"
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933058 4810 generic.go:334] "Generic (PLEG): container finished" podID="b358095c-30ba-4f90-b627-63650857fc49" containerID="468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835" exitCode=0
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933346 4810 generic.go:334] "Generic (PLEG): container finished" podID="b358095c-30ba-4f90-b627-63650857fc49" containerID="961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7" exitCode=2
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933362 4810 generic.go:334] "Generic (PLEG): container finished" podID="b358095c-30ba-4f90-b627-63650857fc49" containerID="c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951" exitCode=0
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933420 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerDied","Data":"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"}
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933445 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerDied","Data":"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"}
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.933454 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerDied","Data":"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"}
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.936317 4810 generic.go:334] "Generic (PLEG): container finished" podID="972d6f5e-3edf-4b6e-bdde-39c580caea31" containerID="19b609d4be47506e6c511dded32be9ffbc5fec785d73d8309ff072ff0f1cf61d" exitCode=0
Feb 19 15:31:07 crc kubenswrapper[4810]: I0219 15:31:07.936430 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-58xq9" event={"ID":"972d6f5e-3edf-4b6e-bdde-39c580caea31","Type":"ContainerDied","Data":"19b609d4be47506e6c511dded32be9ffbc5fec785d73d8309ff072ff0f1cf61d"}
Feb 19 15:31:08 crc kubenswrapper[4810]: I0219 15:31:08.725697 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:08 crc kubenswrapper[4810]: I0219 15:31:08.746341 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.303562 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-58xq9"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.398663 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts\") pod \"972d6f5e-3edf-4b6e-bdde-39c580caea31\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") "
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.398953 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75crr\" (UniqueName: \"kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr\") pod \"972d6f5e-3edf-4b6e-bdde-39c580caea31\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") "
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.399019 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle\") pod \"972d6f5e-3edf-4b6e-bdde-39c580caea31\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") "
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.399058 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data\") pod \"972d6f5e-3edf-4b6e-bdde-39c580caea31\" (UID: \"972d6f5e-3edf-4b6e-bdde-39c580caea31\") "
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.404996 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts" (OuterVolumeSpecName: "scripts") pod "972d6f5e-3edf-4b6e-bdde-39c580caea31" (UID: "972d6f5e-3edf-4b6e-bdde-39c580caea31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.406921 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr" (OuterVolumeSpecName: "kube-api-access-75crr") pod "972d6f5e-3edf-4b6e-bdde-39c580caea31" (UID: "972d6f5e-3edf-4b6e-bdde-39c580caea31"). InnerVolumeSpecName "kube-api-access-75crr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.436649 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data" (OuterVolumeSpecName: "config-data") pod "972d6f5e-3edf-4b6e-bdde-39c580caea31" (UID: "972d6f5e-3edf-4b6e-bdde-39c580caea31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.437119 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972d6f5e-3edf-4b6e-bdde-39c580caea31" (UID: "972d6f5e-3edf-4b6e-bdde-39c580caea31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.490757 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.490804 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.502487 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.502748 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.502758 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972d6f5e-3edf-4b6e-bdde-39c580caea31-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.502767 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75crr\" (UniqueName: \"kubernetes.io/projected/972d6f5e-3edf-4b6e-bdde-39c580caea31-kube-api-access-75crr\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.531395 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.532101 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.915456 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.961372 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-58xq9" event={"ID":"972d6f5e-3edf-4b6e-bdde-39c580caea31","Type":"ContainerDied","Data":"0c01e86e8bf881c49ad9ca06e89f46142a0a38c81d0b0f4cbfef925f99badde4"}
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.961415 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c01e86e8bf881c49ad9ca06e89f46142a0a38c81d0b0f4cbfef925f99badde4"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.961752 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-58xq9"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.964626 4810 generic.go:334] "Generic (PLEG): container finished" podID="b358095c-30ba-4f90-b627-63650857fc49" containerID="c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094" exitCode=0
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.965737 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.966245 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerDied","Data":"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"}
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.966278 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b358095c-30ba-4f90-b627-63650857fc49","Type":"ContainerDied","Data":"6c4b9f3d928f0cccc0fb2b10a58ce849fff32f26291f366f0504f6cc2013df4e"}
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.966299 4810 scope.go:117] "RemoveContainer" containerID="468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.967498 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:09 crc kubenswrapper[4810]: I0219 15:31:09.967520 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.010465 4810 scope.go:117] "RemoveContainer" containerID="961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011184 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011254 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011313 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011401 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011496 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hx8\" (UniqueName: \"kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011533 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011596 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd\") pod \"b358095c-30ba-4f90-b627-63650857fc49\" (UID: \"b358095c-30ba-4f90-b627-63650857fc49\") "
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.011752 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.012205 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.013491 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.016565 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts" (OuterVolumeSpecName: "scripts") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.022799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8" (OuterVolumeSpecName: "kube-api-access-75hx8") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "kube-api-access-75hx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.042234 4810 scope.go:117] "RemoveContainer" containerID="c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.067840 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.068498 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="proxy-httpd"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068525 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="proxy-httpd"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.068554 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="sg-core"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068563 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="sg-core"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.068580 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-central-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068588 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-central-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.068613 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972d6f5e-3edf-4b6e-bdde-39c580caea31" containerName="nova-cell0-conductor-db-sync"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068622 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="972d6f5e-3edf-4b6e-bdde-39c580caea31" containerName="nova-cell0-conductor-db-sync"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.068639 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-notification-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068650 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-notification-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068946 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-notification-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.068986 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="proxy-httpd"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.069004 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="ceilometer-central-agent"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.069023 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="972d6f5e-3edf-4b6e-bdde-39c580caea31" containerName="nova-cell0-conductor-db-sync"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.069042 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358095c-30ba-4f90-b627-63650857fc49" containerName="sg-core"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.071921 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.075603 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.076904 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nqhd6"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.077242 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.077693 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.113890 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppbf\" (UniqueName: \"kubernetes.io/projected/65e6588c-3b7f-4719-beb6-90229629820f-kube-api-access-8ppbf\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.113969 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.114063 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.114150 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.114165 4810 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.114175 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hx8\" (UniqueName: \"kubernetes.io/projected/b358095c-30ba-4f90-b627-63650857fc49-kube-api-access-75hx8\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.114186 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b358095c-30ba-4f90-b627-63650857fc49-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.116054 4810 scope.go:117] "RemoveContainer" containerID="c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.141730 4810 scope.go:117] "RemoveContainer" containerID="468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.142280 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835\": container with ID starting with 468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835 not found: ID does not exist" containerID="468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.142339 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835"} err="failed to get container status \"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835\": rpc error: code = NotFound desc = could not find container \"468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835\": container with ID starting with 468167a7d908f6d23e0b2d81fbf04b7c71508eb08eaac8f05a5952b968bfd835 not found: ID does not exist"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.142366 4810 scope.go:117] "RemoveContainer" containerID="961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.143672 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7\": container with ID starting with 961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7 not found: ID does not exist" containerID="961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.143710 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7"} err="failed to get container status \"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7\": rpc error: code = NotFound desc = could not find container \"961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7\": container with ID starting with 961cd98ef99173e92df54e7b424605d95b61a834adb6a8bf1e7c9b237c0afdc7 not found: ID does not exist"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.143730 4810 scope.go:117] "RemoveContainer" containerID="c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.143997 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951\": container with ID starting with c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951 not found: ID does not exist" containerID="c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.144016 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951"} err="failed to get container status \"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951\": rpc error: code = NotFound desc = could not find container \"c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951\": container with ID starting with c9c0491a003bacacbdcb1eaf1f7857933abd385ecfb0908efea3150739289951 not found: ID does not exist"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.144029 4810 scope.go:117] "RemoveContainer" containerID="c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"
Feb 19 15:31:10 crc kubenswrapper[4810]: E0219 15:31:10.144230 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094\": container with ID starting with c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094 not found: ID does not exist" containerID="c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.144251 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094"} err="failed to get container status \"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094\": rpc error: code = NotFound desc = could not find container \"c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094\": container with ID starting with c213283e307bb78ffccee82c18c4205b43e9c53c95b737f8fce428b65bf54094 not found: ID does not exist"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.162390 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.166061 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data" (OuterVolumeSpecName: "config-data") pod "b358095c-30ba-4f90-b627-63650857fc49" (UID: "b358095c-30ba-4f90-b627-63650857fc49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.215724 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.216097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppbf\" (UniqueName: \"kubernetes.io/projected/65e6588c-3b7f-4719-beb6-90229629820f-kube-api-access-8ppbf\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.216334 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.216519 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.216601 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b358095c-30ba-4f90-b627-63650857fc49-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.220578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.220702 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e6588c-3b7f-4719-beb6-90229629820f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.230828 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ppbf\" (UniqueName: \"kubernetes.io/projected/65e6588c-3b7f-4719-beb6-90229629820f-kube-api-access-8ppbf\") pod \"nova-cell0-conductor-0\" (UID: \"65e6588c-3b7f-4719-beb6-90229629820f\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.351223 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.370828 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.379343 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.381705 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.384982 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.387789 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.392547 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.406805 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419736 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419777 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419886 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npk42\" (UniqueName: \"kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419932 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.419956 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.420017 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522416 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522754 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0"
Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522857 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npk42\" (UniqueName:
\"kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522917 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.522971 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.523089 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.523312 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.528544 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.532995 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.534270 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.538124 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.545169 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npk42\" (UniqueName: \"kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42\") pod \"ceilometer-0\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.699716 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.870683 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 15:31:10 crc kubenswrapper[4810]: I0219 15:31:10.980056 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"65e6588c-3b7f-4719-beb6-90229629820f","Type":"ContainerStarted","Data":"c853cc96422c5081191bd4ef30ca5ee6475e60b6ea25cc44ff8b6bb0a8f8953a"} Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.184257 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.452982 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b358095c-30ba-4f90-b627-63650857fc49" path="/var/lib/kubelet/pods/b358095c-30ba-4f90-b627-63650857fc49/volumes" Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.832344 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.867937 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.994018 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"65e6588c-3b7f-4719-beb6-90229629820f","Type":"ContainerStarted","Data":"7c8b78bbba0adeaa15117148c7900fcb09121122590af963dd88c701dfc73adf"} Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.994821 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.997061 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerStarted","Data":"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98"} Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.997155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerStarted","Data":"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7"} Feb 19 15:31:11 crc kubenswrapper[4810]: I0219 15:31:11.997202 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerStarted","Data":"62ba2b7fb62123b253e68adfb6db44d11bb6e2d13e45773cd9a27fa6ec28a020"} Feb 19 15:31:12 crc kubenswrapper[4810]: I0219 15:31:12.014506 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.014485114 podStartE2EDuration="2.014485114s" podCreationTimestamp="2026-02-19 15:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:12.014375461 +0000 UTC m=+1301.496405615" watchObservedRunningTime="2026-02-19 15:31:12.014485114 +0000 UTC m=+1301.496515248" Feb 19 15:31:13 crc kubenswrapper[4810]: I0219 15:31:13.007213 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerStarted","Data":"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d"} Feb 19 15:31:15 crc kubenswrapper[4810]: I0219 15:31:15.036774 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerStarted","Data":"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6"} Feb 19 15:31:15 crc kubenswrapper[4810]: I0219 15:31:15.037622 4810 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 15:31:15 crc kubenswrapper[4810]: I0219 15:31:15.081574 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.359282376 podStartE2EDuration="5.081552495s" podCreationTimestamp="2026-02-19 15:31:10 +0000 UTC" firstStartedPulling="2026-02-19 15:31:11.185309705 +0000 UTC m=+1300.667339829" lastFinishedPulling="2026-02-19 15:31:13.907579824 +0000 UTC m=+1303.389609948" observedRunningTime="2026-02-19 15:31:15.065367813 +0000 UTC m=+1304.547397947" watchObservedRunningTime="2026-02-19 15:31:15.081552495 +0000 UTC m=+1304.563582629" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.438600 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.925612 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k7bkw"] Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.927051 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.934714 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.935149 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.935769 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.936011 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.936133 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsjr\" (UniqueName: \"kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.936252 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") 
" pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:20 crc kubenswrapper[4810]: I0219 15:31:20.943609 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7bkw"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.038179 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.038272 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhsjr\" (UniqueName: \"kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.038357 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.038397 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.046958 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.047918 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.048448 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.068825 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhsjr\" (UniqueName: \"kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr\") pod \"nova-cell0-cell-mapping-k7bkw\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.138737 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.147510 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.174676 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.214714 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.226560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.228135 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.237018 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.257613 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.258937 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.259040 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5cs\" (UniqueName: \"kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.259067 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qkj9r\" (UniqueName: \"kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.259086 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.259137 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.259174 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.265549 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.300506 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.302590 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.313555 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.316507 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.348495 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.350632 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.357651 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.361180 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364018 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364413 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364438 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364877 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5cs\" (UniqueName: \"kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364932 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkj9r\" (UniqueName: \"kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.364976 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.368090 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.371671 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 
crc kubenswrapper[4810]: I0219 15:31:21.375950 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.376358 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.391843 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5cs\" (UniqueName: \"kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs\") pod \"nova-scheduler-0\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.410727 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkj9r\" (UniqueName: \"kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.466812 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.466883 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.466929 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.466969 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945nq\" (UniqueName: \"kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.467629 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szh6\" (UniqueName: \"kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.467849 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.468184 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.468209 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.483654 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.485279 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.497159 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.497227 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.568910 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.568973 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.568994 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569038 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569071 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569098 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569127 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569145 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945nq\" (UniqueName: \"kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569169 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szh6\" (UniqueName: \"kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569185 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569205 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569233 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf9n\" (UniqueName: \"kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.569256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.570120 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.570693 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " 
pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.574506 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.574592 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.575115 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.581163 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.585287 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.594588 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945nq\" (UniqueName: \"kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq\") pod \"nova-api-0\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.595350 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szh6\" (UniqueName: \"kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6\") pod \"nova-metadata-0\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.670712 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.671001 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.671084 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: 
I0219 15:31:21.671115 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdf9n\" (UniqueName: \"kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.671143 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.671192 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.671961 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.672063 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.672629 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.672929 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.675487 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.690538 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdf9n\" (UniqueName: \"kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n\") pod \"dnsmasq-dns-7cc76f8d79-b9r9k\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.775982 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.781261 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.812856 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:21 crc kubenswrapper[4810]: I0219 15:31:21.943561 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7bkw"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.061930 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hk2fs"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.067553 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.072056 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.073219 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.076772 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hk2fs"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.109483 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxgr\" (UniqueName: \"kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.110071 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: 
I0219 15:31:22.110163 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.110399 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.136898 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.157611 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9e6d16b-a7c2-4a73-866e-6e068e910d82","Type":"ContainerStarted","Data":"8358d2e845bdbf47646ae6969a74848954fb0b3cc77ce5979954030d2a57fe33"} Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.159462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7bkw" event={"ID":"299a53ac-e7e5-47a3-bf65-df5624b77717","Type":"ContainerStarted","Data":"692e978a354f5e17fbfce0a777f2418f1242f2cdbcdb95bec183a7615ee2fc90"} Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.214845 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpxgr\" (UniqueName: \"kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc 
kubenswrapper[4810]: I0219 15:31:22.215627 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.215648 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.215716 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.216290 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.231665 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.232171 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: 
\"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.232617 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.236539 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpxgr\" (UniqueName: \"kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr\") pod \"nova-cell1-conductor-db-sync-hk2fs\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.391228 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.499048 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.531130 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:22 crc kubenswrapper[4810]: I0219 15:31:22.554802 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:31:22 crc kubenswrapper[4810]: W0219 15:31:22.564128 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13abd15_5b9b_4e00_984f_9dabbe51ddbc.slice/crio-3782f704a1deb319907b16927aac1f196131400ef56fc6f3b4943495c1de3d0b WatchSource:0}: Error finding container 3782f704a1deb319907b16927aac1f196131400ef56fc6f3b4943495c1de3d0b: Status 404 returned error can't find the container with id 3782f704a1deb319907b16927aac1f196131400ef56fc6f3b4943495c1de3d0b Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.009547 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hk2fs"] Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.178271 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7bkw" event={"ID":"299a53ac-e7e5-47a3-bf65-df5624b77717","Type":"ContainerStarted","Data":"9c8648b58dedd6b14f6832bd1d2f895ecfd4e781a2433a653d4f48b76efb9fef"} Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.182442 4810 generic.go:334] "Generic (PLEG): container finished" podID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerID="f762a8738d3f47e401158c999773d4f19a8cf6bd6c7936ab82cb0c741248ad3e" exitCode=0 Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.183228 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" event={"ID":"d13abd15-5b9b-4e00-984f-9dabbe51ddbc","Type":"ContainerDied","Data":"f762a8738d3f47e401158c999773d4f19a8cf6bd6c7936ab82cb0c741248ad3e"} Feb 19 15:31:23 
crc kubenswrapper[4810]: I0219 15:31:23.183252 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" event={"ID":"d13abd15-5b9b-4e00-984f-9dabbe51ddbc","Type":"ContainerStarted","Data":"3782f704a1deb319907b16927aac1f196131400ef56fc6f3b4943495c1de3d0b"} Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.186231 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4c8d587-b429-415f-96f6-628924fed084","Type":"ContainerStarted","Data":"4774e9328b610c917f9fd35141fcadf466b7543107fa862abe860e7744c56cb8"} Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.187727 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerStarted","Data":"a598cb61cea96142cc131cec1371fcd736e3d9f078e8978d34c8ba18fa043df1"} Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.203645 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerStarted","Data":"edb4da08df823fefbaa5fd91c5229d2a05bf23c6e464beaac0148e5443f3fbaf"} Feb 19 15:31:23 crc kubenswrapper[4810]: I0219 15:31:23.204435 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-k7bkw" podStartSLOduration=3.20441259 podStartE2EDuration="3.20441259s" podCreationTimestamp="2026-02-19 15:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:23.198565321 +0000 UTC m=+1312.680595445" watchObservedRunningTime="2026-02-19 15:31:23.20441259 +0000 UTC m=+1312.686442724" Feb 19 15:31:24 crc kubenswrapper[4810]: W0219 15:31:24.081398 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1a5ee7_3792_4f35_a967_80fb96c7df10.slice/crio-901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603 WatchSource:0}: Error finding container 901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603: Status 404 returned error can't find the container with id 901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603 Feb 19 15:31:24 crc kubenswrapper[4810]: I0219 15:31:24.221970 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" event={"ID":"5f1a5ee7-3792-4f35-a967-80fb96c7df10","Type":"ContainerStarted","Data":"901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603"} Feb 19 15:31:24 crc kubenswrapper[4810]: I0219 15:31:24.674866 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:24 crc kubenswrapper[4810]: I0219 15:31:24.733154 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.318992 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4c8d587-b429-415f-96f6-628924fed084","Type":"ContainerStarted","Data":"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.321175 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" event={"ID":"5f1a5ee7-3792-4f35-a967-80fb96c7df10","Type":"ContainerStarted","Data":"45859d708bbdd95af868748506ae358c82e96df75fa08cfe41661e0323e54c01"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.319051 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b4c8d587-b429-415f-96f6-628924fed084" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3" gracePeriod=30 Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.327510 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerStarted","Data":"67d7b1b2fe05fdf05040100d731bde7bfd4f1ee47f3a5b6f3aa77c49f45ebc0c"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.327570 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerStarted","Data":"bf2b72419abc5b9d1dc324265338d9686d9c4dd72a0204301ce8a03dc9ab3fd8"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.327619 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-log" containerID="cri-o://bf2b72419abc5b9d1dc324265338d9686d9c4dd72a0204301ce8a03dc9ab3fd8" gracePeriod=30 Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.327651 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-metadata" containerID="cri-o://67d7b1b2fe05fdf05040100d731bde7bfd4f1ee47f3a5b6f3aa77c49f45ebc0c" gracePeriod=30 Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.334682 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerStarted","Data":"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.334737 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerStarted","Data":"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.342952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9e6d16b-a7c2-4a73-866e-6e068e910d82","Type":"ContainerStarted","Data":"6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.352460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" event={"ID":"d13abd15-5b9b-4e00-984f-9dabbe51ddbc","Type":"ContainerStarted","Data":"968b59a59d30b2d92d2973ce319917f2d174a7454d4003dff6b2c557a24c3a76"} Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.352803 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.368120 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.530871095 podStartE2EDuration="5.368104209s" podCreationTimestamp="2026-02-19 15:31:21 +0000 UTC" firstStartedPulling="2026-02-19 15:31:22.231168255 +0000 UTC m=+1311.713198379" lastFinishedPulling="2026-02-19 15:31:25.068401369 +0000 UTC m=+1314.550431493" observedRunningTime="2026-02-19 15:31:26.359653614 +0000 UTC m=+1315.841683738" watchObservedRunningTime="2026-02-19 15:31:26.368104209 +0000 UTC m=+1315.850134333" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.377100 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.87216789 podStartE2EDuration="5.377080918s" podCreationTimestamp="2026-02-19 15:31:21 +0000 UTC" firstStartedPulling="2026-02-19 15:31:22.561583523 +0000 UTC m=+1312.043613647" lastFinishedPulling="2026-02-19 15:31:25.066496551 +0000 UTC 
m=+1314.548526675" observedRunningTime="2026-02-19 15:31:26.374787439 +0000 UTC m=+1315.856817593" watchObservedRunningTime="2026-02-19 15:31:26.377080918 +0000 UTC m=+1315.859111052" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.409806 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.748686288 podStartE2EDuration="5.409789s" podCreationTimestamp="2026-02-19 15:31:21 +0000 UTC" firstStartedPulling="2026-02-19 15:31:22.406603159 +0000 UTC m=+1311.888633283" lastFinishedPulling="2026-02-19 15:31:25.067705871 +0000 UTC m=+1314.549735995" observedRunningTime="2026-02-19 15:31:26.404086405 +0000 UTC m=+1315.886116529" watchObservedRunningTime="2026-02-19 15:31:26.409789 +0000 UTC m=+1315.891819134" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.433777 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5172332280000003 podStartE2EDuration="5.43376291s" podCreationTimestamp="2026-02-19 15:31:21 +0000 UTC" firstStartedPulling="2026-02-19 15:31:22.140027576 +0000 UTC m=+1311.622057700" lastFinishedPulling="2026-02-19 15:31:25.056557258 +0000 UTC m=+1314.538587382" observedRunningTime="2026-02-19 15:31:26.430589339 +0000 UTC m=+1315.912619463" watchObservedRunningTime="2026-02-19 15:31:26.43376291 +0000 UTC m=+1315.915793034" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.472529 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" podStartSLOduration=4.472506926 podStartE2EDuration="4.472506926s" podCreationTimestamp="2026-02-19 15:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:26.470101345 +0000 UTC m=+1315.952131479" watchObservedRunningTime="2026-02-19 15:31:26.472506926 +0000 UTC m=+1315.954537050" Feb 19 
15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.479308 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" podStartSLOduration=5.479291638 podStartE2EDuration="5.479291638s" podCreationTimestamp="2026-02-19 15:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:26.456650282 +0000 UTC m=+1315.938680426" watchObservedRunningTime="2026-02-19 15:31:26.479291638 +0000 UTC m=+1315.961321762" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.497819 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.586976 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.781823 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:26 crc kubenswrapper[4810]: I0219 15:31:26.781880 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.371144 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerID="67d7b1b2fe05fdf05040100d731bde7bfd4f1ee47f3a5b6f3aa77c49f45ebc0c" exitCode=0 Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.371474 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerID="bf2b72419abc5b9d1dc324265338d9686d9c4dd72a0204301ce8a03dc9ab3fd8" exitCode=143 Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.371218 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerDied","Data":"67d7b1b2fe05fdf05040100d731bde7bfd4f1ee47f3a5b6f3aa77c49f45ebc0c"} Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.372092 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerDied","Data":"bf2b72419abc5b9d1dc324265338d9686d9c4dd72a0204301ce8a03dc9ab3fd8"} Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.372114 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e8674be-9f7f-438c-afcb-529178b5fa9a","Type":"ContainerDied","Data":"a598cb61cea96142cc131cec1371fcd736e3d9f078e8978d34c8ba18fa043df1"} Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.372126 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a598cb61cea96142cc131cec1371fcd736e3d9f078e8978d34c8ba18fa043df1" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.498628 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.558543 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2szh6\" (UniqueName: \"kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6\") pod \"1e8674be-9f7f-438c-afcb-529178b5fa9a\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.558719 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle\") pod \"1e8674be-9f7f-438c-afcb-529178b5fa9a\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.558851 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs\") pod \"1e8674be-9f7f-438c-afcb-529178b5fa9a\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.558906 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data\") pod \"1e8674be-9f7f-438c-afcb-529178b5fa9a\" (UID: \"1e8674be-9f7f-438c-afcb-529178b5fa9a\") " Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.559278 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs" (OuterVolumeSpecName: "logs") pod "1e8674be-9f7f-438c-afcb-529178b5fa9a" (UID: "1e8674be-9f7f-438c-afcb-529178b5fa9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.559829 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e8674be-9f7f-438c-afcb-529178b5fa9a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.565190 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6" (OuterVolumeSpecName: "kube-api-access-2szh6") pod "1e8674be-9f7f-438c-afcb-529178b5fa9a" (UID: "1e8674be-9f7f-438c-afcb-529178b5fa9a"). InnerVolumeSpecName "kube-api-access-2szh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.598434 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e8674be-9f7f-438c-afcb-529178b5fa9a" (UID: "1e8674be-9f7f-438c-afcb-529178b5fa9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.600987 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data" (OuterVolumeSpecName: "config-data") pod "1e8674be-9f7f-438c-afcb-529178b5fa9a" (UID: "1e8674be-9f7f-438c-afcb-529178b5fa9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.661679 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.661721 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2szh6\" (UniqueName: \"kubernetes.io/projected/1e8674be-9f7f-438c-afcb-529178b5fa9a-kube-api-access-2szh6\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:27 crc kubenswrapper[4810]: I0219 15:31:27.661733 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8674be-9f7f-438c-afcb-529178b5fa9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.387103 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.464231 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.488156 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.497467 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:28 crc kubenswrapper[4810]: E0219 15:31:28.497976 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-log" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.498000 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-log" Feb 19 15:31:28 crc kubenswrapper[4810]: E0219 15:31:28.498038 4810 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-metadata" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.498048 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-metadata" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.498273 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-metadata" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.498305 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" containerName="nova-metadata-log" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.499593 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.502892 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.505890 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.542595 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.580840 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbpsq\" (UniqueName: \"kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.581142 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.581188 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.581480 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.581652 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.683191 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.683292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.683379 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbpsq\" (UniqueName: \"kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.683425 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.683466 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.684959 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.689280 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.690188 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.690437 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.708396 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbpsq\" (UniqueName: \"kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq\") pod \"nova-metadata-0\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " pod="openstack/nova-metadata-0" Feb 19 15:31:28 crc kubenswrapper[4810]: I0219 15:31:28.827078 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:29 crc kubenswrapper[4810]: I0219 15:31:29.347281 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:29 crc kubenswrapper[4810]: I0219 15:31:29.405443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerStarted","Data":"0be93d0eb552143c9c43a40adef0cc77ef54b17f48e67effc28863e9557f11b4"} Feb 19 15:31:29 crc kubenswrapper[4810]: I0219 15:31:29.450550 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8674be-9f7f-438c-afcb-529178b5fa9a" path="/var/lib/kubelet/pods/1e8674be-9f7f-438c-afcb-529178b5fa9a/volumes" Feb 19 15:31:30 crc kubenswrapper[4810]: I0219 15:31:30.431434 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerStarted","Data":"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6"} Feb 19 15:31:30 crc kubenswrapper[4810]: I0219 15:31:30.431774 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerStarted","Data":"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9"} Feb 19 15:31:30 crc kubenswrapper[4810]: I0219 15:31:30.456479 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.456457717 podStartE2EDuration="2.456457717s" podCreationTimestamp="2026-02-19 15:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:30.455438501 +0000 UTC m=+1319.937468625" watchObservedRunningTime="2026-02-19 15:31:30.456457717 +0000 UTC m=+1319.938487841" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.456045 4810 
generic.go:334] "Generic (PLEG): container finished" podID="299a53ac-e7e5-47a3-bf65-df5624b77717" containerID="9c8648b58dedd6b14f6832bd1d2f895ecfd4e781a2433a653d4f48b76efb9fef" exitCode=0 Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.459517 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7bkw" event={"ID":"299a53ac-e7e5-47a3-bf65-df5624b77717","Type":"ContainerDied","Data":"9c8648b58dedd6b14f6832bd1d2f895ecfd4e781a2433a653d4f48b76efb9fef"} Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.498613 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.526885 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.777400 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.777439 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.815060 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.883956 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:31:31 crc kubenswrapper[4810]: I0219 15:31:31.884520 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="dnsmasq-dns" containerID="cri-o://c249e977cefa0135aa004a3d9624e2b5787cc21f20239233e79602a670cf0acb" gracePeriod=10 Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.482047 4810 generic.go:334] "Generic (PLEG): container 
finished" podID="4a48946e-058c-4395-bbad-5effb50b2228" containerID="c249e977cefa0135aa004a3d9624e2b5787cc21f20239233e79602a670cf0acb" exitCode=0 Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.482694 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerDied","Data":"c249e977cefa0135aa004a3d9624e2b5787cc21f20239233e79602a670cf0acb"} Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.482721 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" event={"ID":"4a48946e-058c-4395-bbad-5effb50b2228","Type":"ContainerDied","Data":"bec8edc98672f19301835694ed5c49318c10a4c5634dfa9bc2728f6b7541a7a3"} Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.482732 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec8edc98672f19301835694ed5c49318c10a4c5634dfa9bc2728f6b7541a7a3" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.488151 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.534082 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567313 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567472 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567503 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567578 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567632 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hfc\" (UniqueName: \"kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: 
\"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.567659 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config\") pod \"4a48946e-058c-4395-bbad-5effb50b2228\" (UID: \"4a48946e-058c-4395-bbad-5effb50b2228\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.618290 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc" (OuterVolumeSpecName: "kube-api-access-v2hfc") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "kube-api-access-v2hfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.646862 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.646878 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.649272 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.672008 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.672038 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.672047 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.672057 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hfc\" (UniqueName: \"kubernetes.io/projected/4a48946e-058c-4395-bbad-5effb50b2228-kube-api-access-v2hfc\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.681126 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.687362 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config" (OuterVolumeSpecName: "config") pod "4a48946e-058c-4395-bbad-5effb50b2228" (UID: "4a48946e-058c-4395-bbad-5effb50b2228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.774054 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.774387 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a48946e-058c-4395-bbad-5effb50b2228-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.816736 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.859604 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.859645 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.876191 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data\") pod \"299a53ac-e7e5-47a3-bf65-df5624b77717\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.876256 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle\") pod \"299a53ac-e7e5-47a3-bf65-df5624b77717\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.876418 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts\") pod \"299a53ac-e7e5-47a3-bf65-df5624b77717\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.876507 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zhsjr\" (UniqueName: \"kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr\") pod \"299a53ac-e7e5-47a3-bf65-df5624b77717\" (UID: \"299a53ac-e7e5-47a3-bf65-df5624b77717\") " Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.880800 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr" (OuterVolumeSpecName: "kube-api-access-zhsjr") pod "299a53ac-e7e5-47a3-bf65-df5624b77717" (UID: "299a53ac-e7e5-47a3-bf65-df5624b77717"). InnerVolumeSpecName "kube-api-access-zhsjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.880966 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts" (OuterVolumeSpecName: "scripts") pod "299a53ac-e7e5-47a3-bf65-df5624b77717" (UID: "299a53ac-e7e5-47a3-bf65-df5624b77717"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.907036 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data" (OuterVolumeSpecName: "config-data") pod "299a53ac-e7e5-47a3-bf65-df5624b77717" (UID: "299a53ac-e7e5-47a3-bf65-df5624b77717"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.914663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "299a53ac-e7e5-47a3-bf65-df5624b77717" (UID: "299a53ac-e7e5-47a3-bf65-df5624b77717"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.979736 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhsjr\" (UniqueName: \"kubernetes.io/projected/299a53ac-e7e5-47a3-bf65-df5624b77717-kube-api-access-zhsjr\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.979773 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.979783 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:32 crc kubenswrapper[4810]: I0219 15:31:32.979792 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/299a53ac-e7e5-47a3-bf65-df5624b77717-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.494936 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7bkw" event={"ID":"299a53ac-e7e5-47a3-bf65-df5624b77717","Type":"ContainerDied","Data":"692e978a354f5e17fbfce0a777f2418f1242f2cdbcdb95bec183a7615ee2fc90"} Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.494983 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="692e978a354f5e17fbfce0a777f2418f1242f2cdbcdb95bec183a7615ee2fc90" Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.495060 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7bkw" Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.495100 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c58b86477-9tbw7" Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.522502 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.558153 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c58b86477-9tbw7"] Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.685612 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.688333 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-log" containerID="cri-o://926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7" gracePeriod=30 Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.689044 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-api" containerID="cri-o://c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b" gracePeriod=30 Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.705047 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.705357 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-log" containerID="cri-o://f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" gracePeriod=30 Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.705798 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-metadata" 
containerID="cri-o://9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" gracePeriod=30 Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.718208 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.827825 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:33 crc kubenswrapper[4810]: I0219 15:31:33.827863 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.256293 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.303525 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle\") pod \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.303640 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbpsq\" (UniqueName: \"kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq\") pod \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.303726 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs\") pod \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.303760 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs\") pod \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.303804 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data\") pod \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\" (UID: \"975e5394-f9a1-428e-90e9-6e1ea9c757e1\") " Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.305347 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs" (OuterVolumeSpecName: "logs") pod "975e5394-f9a1-428e-90e9-6e1ea9c757e1" (UID: "975e5394-f9a1-428e-90e9-6e1ea9c757e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.308799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq" (OuterVolumeSpecName: "kube-api-access-pbpsq") pod "975e5394-f9a1-428e-90e9-6e1ea9c757e1" (UID: "975e5394-f9a1-428e-90e9-6e1ea9c757e1"). InnerVolumeSpecName "kube-api-access-pbpsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.336094 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data" (OuterVolumeSpecName: "config-data") pod "975e5394-f9a1-428e-90e9-6e1ea9c757e1" (UID: "975e5394-f9a1-428e-90e9-6e1ea9c757e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.354275 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "975e5394-f9a1-428e-90e9-6e1ea9c757e1" (UID: "975e5394-f9a1-428e-90e9-6e1ea9c757e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.369081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "975e5394-f9a1-428e-90e9-6e1ea9c757e1" (UID: "975e5394-f9a1-428e-90e9-6e1ea9c757e1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.406780 4810 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.406816 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/975e5394-f9a1-428e-90e9-6e1ea9c757e1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.406831 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.406843 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975e5394-f9a1-428e-90e9-6e1ea9c757e1-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.406854 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbpsq\" (UniqueName: \"kubernetes.io/projected/975e5394-f9a1-428e-90e9-6e1ea9c757e1-kube-api-access-pbpsq\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.512111 4810 generic.go:334] "Generic (PLEG): container finished" podID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerID="926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7" exitCode=143 Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.512182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerDied","Data":"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7"} Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514029 4810 generic.go:334] "Generic (PLEG): container finished" podID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerID="9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" exitCode=0 Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514054 4810 generic.go:334] "Generic (PLEG): container finished" podID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerID="f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" exitCode=143 Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514086 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerDied","Data":"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6"} Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514104 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerDied","Data":"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9"} Feb 19 15:31:34 crc 
kubenswrapper[4810]: I0219 15:31:34.514116 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"975e5394-f9a1-428e-90e9-6e1ea9c757e1","Type":"ContainerDied","Data":"0be93d0eb552143c9c43a40adef0cc77ef54b17f48e67effc28863e9557f11b4"} Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514133 4810 scope.go:117] "RemoveContainer" containerID="9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.514272 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.519551 4810 generic.go:334] "Generic (PLEG): container finished" podID="5f1a5ee7-3792-4f35-a967-80fb96c7df10" containerID="45859d708bbdd95af868748506ae358c82e96df75fa08cfe41661e0323e54c01" exitCode=0 Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.519629 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" event={"ID":"5f1a5ee7-3792-4f35-a967-80fb96c7df10","Type":"ContainerDied","Data":"45859d708bbdd95af868748506ae358c82e96df75fa08cfe41661e0323e54c01"} Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.519739 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerName="nova-scheduler-scheduler" containerID="cri-o://6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" gracePeriod=30 Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.549967 4810 scope.go:117] "RemoveContainer" containerID="f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.572271 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.581130 4810 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.586500 4810 scope.go:117] "RemoveContainer" containerID="9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.587043 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6\": container with ID starting with 9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6 not found: ID does not exist" containerID="9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.587098 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6"} err="failed to get container status \"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6\": rpc error: code = NotFound desc = could not find container \"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6\": container with ID starting with 9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6 not found: ID does not exist" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.587132 4810 scope.go:117] "RemoveContainer" containerID="f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.587614 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9\": container with ID starting with f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9 not found: ID does not exist" containerID="f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.587635 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9"} err="failed to get container status \"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9\": rpc error: code = NotFound desc = could not find container \"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9\": container with ID starting with f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9 not found: ID does not exist" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.587649 4810 scope.go:117] "RemoveContainer" containerID="9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.588074 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6"} err="failed to get container status \"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6\": rpc error: code = NotFound desc = could not find container \"9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6\": container with ID starting with 9c0c8232f43851bc134a132b71a1fe65041c1ed090af3645c3b5f0de510160c6 not found: ID does not exist" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.588110 4810 scope.go:117] "RemoveContainer" containerID="f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.588364 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9"} err="failed to get container status \"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9\": rpc error: code = NotFound desc = could not find container \"f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9\": container with ID starting with 
f66cc085b0b58fa637f1e108fa5e008ecf9029e272faad0d2e511d170502fca9 not found: ID does not exist" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.616545 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.616999 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="init" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617015 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="init" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.617030 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="dnsmasq-dns" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617036 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="dnsmasq-dns" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.617048 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-log" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617054 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-log" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.617067 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299a53ac-e7e5-47a3-bf65-df5624b77717" containerName="nova-manage" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617073 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="299a53ac-e7e5-47a3-bf65-df5624b77717" containerName="nova-manage" Feb 19 15:31:34 crc kubenswrapper[4810]: E0219 15:31:34.617098 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" 
containerName="nova-metadata-metadata" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617104 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-metadata" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617275 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-metadata" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617287 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a48946e-058c-4395-bbad-5effb50b2228" containerName="dnsmasq-dns" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617297 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" containerName="nova-metadata-log" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.617307 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="299a53ac-e7e5-47a3-bf65-df5624b77717" containerName="nova-manage" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.618383 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.620690 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.621821 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.626835 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.715778 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.715934 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4dgr\" (UniqueName: \"kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.716114 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.716380 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.716400 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.818146 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4dgr\" (UniqueName: \"kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.818405 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.818497 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.818528 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " 
pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.818614 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.819093 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.824722 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.827181 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.827494 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.846584 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4dgr\" (UniqueName: 
\"kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr\") pod \"nova-metadata-0\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " pod="openstack/nova-metadata-0" Feb 19 15:31:34 crc kubenswrapper[4810]: I0219 15:31:34.934116 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:31:35 crc kubenswrapper[4810]: I0219 15:31:35.449478 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a48946e-058c-4395-bbad-5effb50b2228" path="/var/lib/kubelet/pods/4a48946e-058c-4395-bbad-5effb50b2228/volumes" Feb 19 15:31:35 crc kubenswrapper[4810]: I0219 15:31:35.450553 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975e5394-f9a1-428e-90e9-6e1ea9c757e1" path="/var/lib/kubelet/pods/975e5394-f9a1-428e-90e9-6e1ea9c757e1/volumes" Feb 19 15:31:35 crc kubenswrapper[4810]: I0219 15:31:35.484984 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:31:35 crc kubenswrapper[4810]: I0219 15:31:35.532152 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerStarted","Data":"9c0cc70373e1f5deebdb6c556b83e6477c32297bae250fc5ee8217259477409e"} Feb 19 15:31:35 crc kubenswrapper[4810]: I0219 15:31:35.848701 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.049078 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data\") pod \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.049177 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts\") pod \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.049453 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle\") pod \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.049594 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpxgr\" (UniqueName: \"kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr\") pod \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\" (UID: \"5f1a5ee7-3792-4f35-a967-80fb96c7df10\") " Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.053915 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr" (OuterVolumeSpecName: "kube-api-access-qpxgr") pod "5f1a5ee7-3792-4f35-a967-80fb96c7df10" (UID: "5f1a5ee7-3792-4f35-a967-80fb96c7df10"). InnerVolumeSpecName "kube-api-access-qpxgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.054049 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts" (OuterVolumeSpecName: "scripts") pod "5f1a5ee7-3792-4f35-a967-80fb96c7df10" (UID: "5f1a5ee7-3792-4f35-a967-80fb96c7df10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.079666 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data" (OuterVolumeSpecName: "config-data") pod "5f1a5ee7-3792-4f35-a967-80fb96c7df10" (UID: "5f1a5ee7-3792-4f35-a967-80fb96c7df10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.094521 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f1a5ee7-3792-4f35-a967-80fb96c7df10" (UID: "5f1a5ee7-3792-4f35-a967-80fb96c7df10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.152312 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.152381 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpxgr\" (UniqueName: \"kubernetes.io/projected/5f1a5ee7-3792-4f35-a967-80fb96c7df10-kube-api-access-qpxgr\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.152397 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.152408 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1a5ee7-3792-4f35-a967-80fb96c7df10-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:36 crc kubenswrapper[4810]: E0219 15:31:36.499769 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:31:36 crc kubenswrapper[4810]: E0219 15:31:36.502143 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:31:36 crc kubenswrapper[4810]: E0219 15:31:36.503844 4810 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 15:31:36 crc kubenswrapper[4810]: E0219 15:31:36.503933 4810 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerName="nova-scheduler-scheduler" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.547212 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerStarted","Data":"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95"} Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.547260 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerStarted","Data":"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17"} Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.560934 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" event={"ID":"5f1a5ee7-3792-4f35-a967-80fb96c7df10","Type":"ContainerDied","Data":"901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603"} Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.560983 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901147ab356b116732cf4aeab1e9b5ea5ce5785575b20bc7bda82db966cdb603" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.560986 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hk2fs" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.588675 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.58864837 podStartE2EDuration="2.58864837s" podCreationTimestamp="2026-02-19 15:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:36.584855923 +0000 UTC m=+1326.066886047" watchObservedRunningTime="2026-02-19 15:31:36.58864837 +0000 UTC m=+1326.070678534" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.628348 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:31:36 crc kubenswrapper[4810]: E0219 15:31:36.628804 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1a5ee7-3792-4f35-a967-80fb96c7df10" containerName="nova-cell1-conductor-db-sync" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.628822 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1a5ee7-3792-4f35-a967-80fb96c7df10" containerName="nova-cell1-conductor-db-sync" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.628995 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1a5ee7-3792-4f35-a967-80fb96c7df10" containerName="nova-cell1-conductor-db-sync" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.629658 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.631657 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.638021 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.681172 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.681272 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.681307 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlvj\" (UniqueName: \"kubernetes.io/projected/f93aa728-7924-4a75-ad48-cc174764cf3e-kube-api-access-7rlvj\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.782841 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc 
kubenswrapper[4810]: I0219 15:31:36.782907 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.782927 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlvj\" (UniqueName: \"kubernetes.io/projected/f93aa728-7924-4a75-ad48-cc174764cf3e-kube-api-access-7rlvj\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.791241 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.801851 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93aa728-7924-4a75-ad48-cc174764cf3e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.807578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlvj\" (UniqueName: \"kubernetes.io/projected/f93aa728-7924-4a75-ad48-cc174764cf3e-kube-api-access-7rlvj\") pod \"nova-cell1-conductor-0\" (UID: \"f93aa728-7924-4a75-ad48-cc174764cf3e\") " pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:36 crc kubenswrapper[4810]: I0219 15:31:36.973660 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.313453 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.398374 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle\") pod \"8841a6af-789a-4dd9-81ed-3afc45b255e4\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.398445 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-945nq\" (UniqueName: \"kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq\") pod \"8841a6af-789a-4dd9-81ed-3afc45b255e4\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.398488 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data\") pod \"8841a6af-789a-4dd9-81ed-3afc45b255e4\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.398527 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs\") pod \"8841a6af-789a-4dd9-81ed-3afc45b255e4\" (UID: \"8841a6af-789a-4dd9-81ed-3afc45b255e4\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.399509 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs" (OuterVolumeSpecName: "logs") pod "8841a6af-789a-4dd9-81ed-3afc45b255e4" (UID: "8841a6af-789a-4dd9-81ed-3afc45b255e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.405475 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq" (OuterVolumeSpecName: "kube-api-access-945nq") pod "8841a6af-789a-4dd9-81ed-3afc45b255e4" (UID: "8841a6af-789a-4dd9-81ed-3afc45b255e4"). InnerVolumeSpecName "kube-api-access-945nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.440624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data" (OuterVolumeSpecName: "config-data") pod "8841a6af-789a-4dd9-81ed-3afc45b255e4" (UID: "8841a6af-789a-4dd9-81ed-3afc45b255e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.456127 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8841a6af-789a-4dd9-81ed-3afc45b255e4" (UID: "8841a6af-789a-4dd9-81ed-3afc45b255e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.475117 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.502645 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.503014 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-945nq\" (UniqueName: \"kubernetes.io/projected/8841a6af-789a-4dd9-81ed-3afc45b255e4-kube-api-access-945nq\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.503106 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8841a6af-789a-4dd9-81ed-3afc45b255e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.503215 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8841a6af-789a-4dd9-81ed-3afc45b255e4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.570855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f93aa728-7924-4a75-ad48-cc174764cf3e","Type":"ContainerStarted","Data":"b6afbb2a6ee072828554f7309688075e596e5f0f978b9e24813ca66e6fdef0b9"} Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.572511 4810 generic.go:334] "Generic (PLEG): container finished" podID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerID="c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b" exitCode=0 Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.572567 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.572581 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerDied","Data":"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b"} Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.572606 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8841a6af-789a-4dd9-81ed-3afc45b255e4","Type":"ContainerDied","Data":"edb4da08df823fefbaa5fd91c5229d2a05bf23c6e464beaac0148e5443f3fbaf"} Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.572624 4810 scope.go:117] "RemoveContainer" containerID="c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.576252 4810 generic.go:334] "Generic (PLEG): container finished" podID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerID="6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" exitCode=0 Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.576388 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9e6d16b-a7c2-4a73-866e-6e068e910d82","Type":"ContainerDied","Data":"6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f"} Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.601078 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.609832 4810 scope.go:117] "RemoveContainer" containerID="926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.611399 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.635195 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 
15:31:37 crc kubenswrapper[4810]: E0219 15:31:37.635798 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-api" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.635816 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-api" Feb 19 15:31:37 crc kubenswrapper[4810]: E0219 15:31:37.635842 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-log" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.635852 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-log" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.636081 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-log" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.636116 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" containerName="nova-api-api" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.637459 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.639737 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.641979 4810 scope.go:117] "RemoveContainer" containerID="c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b" Feb 19 15:31:37 crc kubenswrapper[4810]: E0219 15:31:37.642396 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b\": container with ID starting with c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b not found: ID does not exist" containerID="c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.642430 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b"} err="failed to get container status \"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b\": rpc error: code = NotFound desc = could not find container \"c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b\": container with ID starting with c77557ffc93a6c8ce5aa2b9530c80c11dbadbea58a331e6c0ce538bfc266ec9b not found: ID does not exist" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.642455 4810 scope.go:117] "RemoveContainer" containerID="926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7" Feb 19 15:31:37 crc kubenswrapper[4810]: E0219 15:31:37.644790 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7\": container with ID starting with 926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7 not found: 
ID does not exist" containerID="926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.644818 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7"} err="failed to get container status \"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7\": rpc error: code = NotFound desc = could not find container \"926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7\": container with ID starting with 926de79f3f2ffb9fa64a65d58129228f7afa6cf976838a6de05084ca0e762fe7 not found: ID does not exist" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.651840 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.717190 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.717799 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v46gz\" (UniqueName: \"kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.717837 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 
15:31:37.717923 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.756066 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.818711 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5cs\" (UniqueName: \"kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs\") pod \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.818811 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle\") pod \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.819024 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data\") pod \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\" (UID: \"d9e6d16b-a7c2-4a73-866e-6e068e910d82\") " Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.819405 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v46gz\" (UniqueName: \"kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.819453 
4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.819525 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.819609 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.821526 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.825971 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.826202 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 
crc kubenswrapper[4810]: I0219 15:31:37.826522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs" (OuterVolumeSpecName: "kube-api-access-7g5cs") pod "d9e6d16b-a7c2-4a73-866e-6e068e910d82" (UID: "d9e6d16b-a7c2-4a73-866e-6e068e910d82"). InnerVolumeSpecName "kube-api-access-7g5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.837285 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v46gz\" (UniqueName: \"kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz\") pod \"nova-api-0\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " pod="openstack/nova-api-0" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.849255 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data" (OuterVolumeSpecName: "config-data") pod "d9e6d16b-a7c2-4a73-866e-6e068e910d82" (UID: "d9e6d16b-a7c2-4a73-866e-6e068e910d82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.849884 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9e6d16b-a7c2-4a73-866e-6e068e910d82" (UID: "d9e6d16b-a7c2-4a73-866e-6e068e910d82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.920773 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.920833 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5cs\" (UniqueName: \"kubernetes.io/projected/d9e6d16b-a7c2-4a73-866e-6e068e910d82-kube-api-access-7g5cs\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.920843 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e6d16b-a7c2-4a73-866e-6e068e910d82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:37 crc kubenswrapper[4810]: I0219 15:31:37.956467 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.427039 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.591604 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f93aa728-7924-4a75-ad48-cc174764cf3e","Type":"ContainerStarted","Data":"d781fd3f88e4222668b7ea2ba1070d00a14ad4ff9ae94d33e848586b5dfa2f84"} Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.591760 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.598543 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerStarted","Data":"6b9772e1210b523a5eae56618cf676698a333f36d8eea10d60634c0ac175381c"} Feb 19 15:31:38 crc kubenswrapper[4810]: 
I0219 15:31:38.601342 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9e6d16b-a7c2-4a73-866e-6e068e910d82","Type":"ContainerDied","Data":"8358d2e845bdbf47646ae6969a74848954fb0b3cc77ce5979954030d2a57fe33"} Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.601396 4810 scope.go:117] "RemoveContainer" containerID="6b9c2cfe7570d6ebf3ef1994a77aa29a1ffc94791dbe2f685217ccc99624a14f" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.601409 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.615162 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.615128764 podStartE2EDuration="2.615128764s" podCreationTimestamp="2026-02-19 15:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:38.613563194 +0000 UTC m=+1328.095593428" watchObservedRunningTime="2026-02-19 15:31:38.615128764 +0000 UTC m=+1328.097158888" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.669437 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.704392 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.715544 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:38 crc kubenswrapper[4810]: E0219 15:31:38.716187 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerName="nova-scheduler-scheduler" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.716271 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerName="nova-scheduler-scheduler" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.716526 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" containerName="nova-scheduler-scheduler" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.717275 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.721075 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.736932 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.771863 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.771900 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.772046 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rpb\" (UniqueName: \"kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: 
I0219 15:31:38.874222 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.874291 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.874489 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rpb\" (UniqueName: \"kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.878732 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.879651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:38 crc kubenswrapper[4810]: I0219 15:31:38.895366 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rpb\" (UniqueName: 
\"kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb\") pod \"nova-scheduler-0\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " pod="openstack/nova-scheduler-0" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.195085 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.452307 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8841a6af-789a-4dd9-81ed-3afc45b255e4" path="/var/lib/kubelet/pods/8841a6af-789a-4dd9-81ed-3afc45b255e4/volumes" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.453121 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e6d16b-a7c2-4a73-866e-6e068e910d82" path="/var/lib/kubelet/pods/d9e6d16b-a7c2-4a73-866e-6e068e910d82/volumes" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.614907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerStarted","Data":"c92ab671905f1460e2a7dccb331259f6a59c52684a2e0df2a88b275d79bf21c4"} Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.614947 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerStarted","Data":"f3afc915b44f3696b28ac2a3768f5a557a2b647fe608482478f7471cf5736c24"} Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.647403 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.653870 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.653857323 podStartE2EDuration="2.653857323s" podCreationTimestamp="2026-02-19 15:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-19 15:31:39.64113708 +0000 UTC m=+1329.123167204" watchObservedRunningTime="2026-02-19 15:31:39.653857323 +0000 UTC m=+1329.135887447" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.934804 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:39 crc kubenswrapper[4810]: I0219 15:31:39.935177 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:31:40 crc kubenswrapper[4810]: I0219 15:31:40.623270 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9776a876-19db-446c-a7bf-d6fe0111d7b8","Type":"ContainerStarted","Data":"9050cf4664528bc43d0a9883f3bb240ba2cdcad9a8907febfcebd4f66322d6a0"} Feb 19 15:31:40 crc kubenswrapper[4810]: I0219 15:31:40.623317 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9776a876-19db-446c-a7bf-d6fe0111d7b8","Type":"ContainerStarted","Data":"e26af5d090a2b6422428d4303d703f7d22f7771910d627d6bbd17faed1d8906a"} Feb 19 15:31:40 crc kubenswrapper[4810]: I0219 15:31:40.651350 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.651310574 podStartE2EDuration="2.651310574s" podCreationTimestamp="2026-02-19 15:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:40.645307581 +0000 UTC m=+1330.127337705" watchObservedRunningTime="2026-02-19 15:31:40.651310574 +0000 UTC m=+1330.133340698" Feb 19 15:31:40 crc kubenswrapper[4810]: I0219 15:31:40.728833 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.195939 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 
15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.389712 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.389925 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="097bc4d1-5648-4607-9c49-286e4bbbe553" containerName="kube-state-metrics" containerID="cri-o://35654a2e76fc1f65be05a171e2aeec58c5e73e3b78c5850da9115db247aae94f" gracePeriod=30 Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.678682 4810 generic.go:334] "Generic (PLEG): container finished" podID="097bc4d1-5648-4607-9c49-286e4bbbe553" containerID="35654a2e76fc1f65be05a171e2aeec58c5e73e3b78c5850da9115db247aae94f" exitCode=2 Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.678720 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"097bc4d1-5648-4607-9c49-286e4bbbe553","Type":"ContainerDied","Data":"35654a2e76fc1f65be05a171e2aeec58c5e73e3b78c5850da9115db247aae94f"} Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.900746 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.934428 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:31:44 crc kubenswrapper[4810]: I0219 15:31:44.934481 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.011419 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdrqf\" (UniqueName: \"kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf\") pod \"097bc4d1-5648-4607-9c49-286e4bbbe553\" (UID: \"097bc4d1-5648-4607-9c49-286e4bbbe553\") " Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.018444 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf" (OuterVolumeSpecName: "kube-api-access-mdrqf") pod "097bc4d1-5648-4607-9c49-286e4bbbe553" (UID: "097bc4d1-5648-4607-9c49-286e4bbbe553"). InnerVolumeSpecName "kube-api-access-mdrqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.113866 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdrqf\" (UniqueName: \"kubernetes.io/projected/097bc4d1-5648-4607-9c49-286e4bbbe553-kube-api-access-mdrqf\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.689098 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"097bc4d1-5648-4607-9c49-286e4bbbe553","Type":"ContainerDied","Data":"7d6bd84bead9eb4536dd357117c11ae4d96e35d9a18c032d07c64f477200a6eb"} Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.689153 4810 scope.go:117] "RemoveContainer" containerID="35654a2e76fc1f65be05a171e2aeec58c5e73e3b78c5850da9115db247aae94f" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.689303 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.708748 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.724488 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.740024 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:45 crc kubenswrapper[4810]: E0219 15:31:45.740904 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097bc4d1-5648-4607-9c49-286e4bbbe553" containerName="kube-state-metrics" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.740932 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="097bc4d1-5648-4607-9c49-286e4bbbe553" containerName="kube-state-metrics" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.741270 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="097bc4d1-5648-4607-9c49-286e4bbbe553" containerName="kube-state-metrics" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.742528 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.744832 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.746244 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.750868 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.827200 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.827244 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzx2\" (UniqueName: \"kubernetes.io/projected/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-api-access-gqzx2\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.827343 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " 
pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.827373 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.929485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.929874 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqzx2\" (UniqueName: \"kubernetes.io/projected/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-api-access-gqzx2\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.930006 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.930035 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" 
Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.935937 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.936085 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.945951 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.946484 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.946446 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:45 crc kubenswrapper[4810]: I0219 15:31:45.948671 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gqzx2\" (UniqueName: \"kubernetes.io/projected/9358dbee-2e5b-432d-98e0-6945d2e0d44b-kube-api-access-gqzx2\") pod \"kube-state-metrics-0\" (UID: \"9358dbee-2e5b-432d-98e0-6945d2e0d44b\") " pod="openstack/kube-state-metrics-0" Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.058593 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.540702 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.541229 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="sg-core" containerID="cri-o://4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d" gracePeriod=30 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.541229 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-notification-agent" containerID="cri-o://308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98" gracePeriod=30 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.541229 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="proxy-httpd" containerID="cri-o://2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6" gracePeriod=30 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.541092 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-central-agent" containerID="cri-o://6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7" 
gracePeriod=30 Feb 19 15:31:46 crc kubenswrapper[4810]: W0219 15:31:46.580137 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9358dbee_2e5b_432d_98e0_6945d2e0d44b.slice/crio-f33881054638ff16625cc87cbd7f7d7fde7f7a6632ba65976becde1e415786d5 WatchSource:0}: Error finding container f33881054638ff16625cc87cbd7f7d7fde7f7a6632ba65976becde1e415786d5: Status 404 returned error can't find the container with id f33881054638ff16625cc87cbd7f7d7fde7f7a6632ba65976becde1e415786d5 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.595070 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.699223 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9358dbee-2e5b-432d-98e0-6945d2e0d44b","Type":"ContainerStarted","Data":"f33881054638ff16625cc87cbd7f7d7fde7f7a6632ba65976becde1e415786d5"} Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.706215 4810 generic.go:334] "Generic (PLEG): container finished" podID="af3ec395-6313-4094-9597-b52da27a0d7e" containerID="2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6" exitCode=0 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.706242 4810 generic.go:334] "Generic (PLEG): container finished" podID="af3ec395-6313-4094-9597-b52da27a0d7e" containerID="4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d" exitCode=2 Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.706261 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerDied","Data":"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6"} Feb 19 15:31:46 crc kubenswrapper[4810]: I0219 15:31:46.706283 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerDied","Data":"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d"} Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.044103 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.450465 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097bc4d1-5648-4607-9c49-286e4bbbe553" path="/var/lib/kubelet/pods/097bc4d1-5648-4607-9c49-286e4bbbe553/volumes" Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.718748 4810 generic.go:334] "Generic (PLEG): container finished" podID="af3ec395-6313-4094-9597-b52da27a0d7e" containerID="6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7" exitCode=0 Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.718827 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerDied","Data":"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7"} Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.721475 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9358dbee-2e5b-432d-98e0-6945d2e0d44b","Type":"ContainerStarted","Data":"9cc90f1afe6da59c95622d76dbac631c1a19726a0bc5454289544aee0a783fc2"} Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.721625 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.743483 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.167833396 podStartE2EDuration="2.743458652s" podCreationTimestamp="2026-02-19 15:31:45 +0000 UTC" firstStartedPulling="2026-02-19 15:31:46.582279287 +0000 UTC m=+1336.064309411" 
lastFinishedPulling="2026-02-19 15:31:47.157904533 +0000 UTC m=+1336.639934667" observedRunningTime="2026-02-19 15:31:47.736053424 +0000 UTC m=+1337.218083548" watchObservedRunningTime="2026-02-19 15:31:47.743458652 +0000 UTC m=+1337.225488776" Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.957125 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:31:47 crc kubenswrapper[4810]: I0219 15:31:47.957168 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.040669 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.040688 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.196004 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.231687 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.537175 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.537505 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:31:49 crc kubenswrapper[4810]: I0219 15:31:49.770826 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.224715 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.291387 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.291675 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npk42\" (UniqueName: \"kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.291802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.291899 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.292013 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.292098 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.292250 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.292572 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle\") pod \"af3ec395-6313-4094-9597-b52da27a0d7e\" (UID: \"af3ec395-6313-4094-9597-b52da27a0d7e\") " Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.293354 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.292145 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.303269 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42" (OuterVolumeSpecName: "kube-api-access-npk42") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "kube-api-access-npk42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.305161 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts" (OuterVolumeSpecName: "scripts") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.340664 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.396020 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npk42\" (UniqueName: \"kubernetes.io/projected/af3ec395-6313-4094-9597-b52da27a0d7e-kube-api-access-npk42\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.396268 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af3ec395-6313-4094-9597-b52da27a0d7e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.396374 4810 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.396474 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.408158 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.429663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data" (OuterVolumeSpecName: "config-data") pod "af3ec395-6313-4094-9597-b52da27a0d7e" (UID: "af3ec395-6313-4094-9597-b52da27a0d7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.498086 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.498686 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af3ec395-6313-4094-9597-b52da27a0d7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.797543 4810 generic.go:334] "Generic (PLEG): container finished" podID="af3ec395-6313-4094-9597-b52da27a0d7e" containerID="308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98" exitCode=0 Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.797588 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerDied","Data":"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98"} Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.797682 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.798235 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af3ec395-6313-4094-9597-b52da27a0d7e","Type":"ContainerDied","Data":"62ba2b7fb62123b253e68adfb6db44d11bb6e2d13e45773cd9a27fa6ec28a020"} Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.798264 4810 scope.go:117] "RemoveContainer" containerID="2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.826103 4810 scope.go:117] "RemoveContainer" containerID="4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.844920 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.862805 4810 scope.go:117] "RemoveContainer" containerID="308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.865987 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.884298 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.884757 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="sg-core" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.884773 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="sg-core" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.884796 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-notification-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.884805 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-notification-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.884827 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-central-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.884835 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-central-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.884856 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="proxy-httpd" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.884862 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="proxy-httpd" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.885080 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-notification-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.885094 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="sg-core" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.885118 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="proxy-httpd" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.885128 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" containerName="ceilometer-central-agent" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.887035 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.892847 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.893014 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.893044 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.893879 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906607 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906692 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906732 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x4n\" (UniqueName: \"kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906774 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906795 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906846 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906873 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.906927 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.941723 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.947351 4810 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.954140 4810 scope.go:117] "RemoveContainer" containerID="6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.957010 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.985638 4810 scope.go:117] "RemoveContainer" containerID="2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.996807 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6\": container with ID starting with 2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6 not found: ID does not exist" containerID="2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.996853 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6"} err="failed to get container status \"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6\": rpc error: code = NotFound desc = could not find container \"2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6\": container with ID starting with 2adc7f5bee46c61926054ab4e65ab8e1825b109d167cc80842a20a1750f29ed6 not found: ID does not exist" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.996877 4810 scope.go:117] "RemoveContainer" containerID="4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.998034 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d\": container with ID starting with 4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d not found: ID does not exist" containerID="4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.998069 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d"} err="failed to get container status \"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d\": rpc error: code = NotFound desc = could not find container \"4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d\": container with ID starting with 4925cc21f9c39a46d677b62b982c7df70c5e45eb8c42ab96787c52123262f91d not found: ID does not exist" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.998085 4810 scope.go:117] "RemoveContainer" containerID="308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.998530 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98\": container with ID starting with 308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98 not found: ID does not exist" containerID="308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.998572 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98"} err="failed to get container status \"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98\": rpc error: code = NotFound desc = could not find container 
\"308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98\": container with ID starting with 308ad8f8b8e7c54aece6c4e1cc25e71fd1252432f00bfbdb0df4940372379f98 not found: ID does not exist" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.998600 4810 scope.go:117] "RemoveContainer" containerID="6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7" Feb 19 15:31:54 crc kubenswrapper[4810]: E0219 15:31:54.998867 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7\": container with ID starting with 6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7 not found: ID does not exist" containerID="6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7" Feb 19 15:31:54 crc kubenswrapper[4810]: I0219 15:31:54.998890 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7"} err="failed to get container status \"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7\": rpc error: code = NotFound desc = could not find container \"6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7\": container with ID starting with 6cec0822ad581a03d19d641977e9a7d547ef54d70d04ff9e2ccd6f292e9245c7 not found: ID does not exist" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008665 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008735 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008796 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008856 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008884 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27x4n\" (UniqueName: \"kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008940 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.008963 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 
crc kubenswrapper[4810]: I0219 15:31:55.009029 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.009219 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.009505 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.014001 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.015200 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.015819 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.016146 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.017053 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.024517 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27x4n\" (UniqueName: \"kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n\") pod \"ceilometer-0\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.243563 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.464088 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3ec395-6313-4094-9597-b52da27a0d7e" path="/var/lib/kubelet/pods/af3ec395-6313-4094-9597-b52da27a0d7e/volumes" Feb 19 15:31:55 crc kubenswrapper[4810]: W0219 15:31:55.739499 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2594f4d_4c2f_4fc1_bda2_98a148e09b20.slice/crio-c1f4a9d4ae39c71c4f7ab55984edc9f42ceec7ca748ce3c8a863c795e3882a8e WatchSource:0}: Error finding container c1f4a9d4ae39c71c4f7ab55984edc9f42ceec7ca748ce3c8a863c795e3882a8e: Status 404 returned error can't find the container with id c1f4a9d4ae39c71c4f7ab55984edc9f42ceec7ca748ce3c8a863c795e3882a8e Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.739508 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.823115 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerStarted","Data":"c1f4a9d4ae39c71c4f7ab55984edc9f42ceec7ca748ce3c8a863c795e3882a8e"} Feb 19 15:31:55 crc kubenswrapper[4810]: I0219 15:31:55.833254 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.075560 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.655133 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.755114 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data\") pod \"b4c8d587-b429-415f-96f6-628924fed084\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.755551 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkj9r\" (UniqueName: \"kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r\") pod \"b4c8d587-b429-415f-96f6-628924fed084\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.755636 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle\") pod \"b4c8d587-b429-415f-96f6-628924fed084\" (UID: \"b4c8d587-b429-415f-96f6-628924fed084\") " Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.762107 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r" (OuterVolumeSpecName: "kube-api-access-qkj9r") pod "b4c8d587-b429-415f-96f6-628924fed084" (UID: "b4c8d587-b429-415f-96f6-628924fed084"). InnerVolumeSpecName "kube-api-access-qkj9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.785580 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data" (OuterVolumeSpecName: "config-data") pod "b4c8d587-b429-415f-96f6-628924fed084" (UID: "b4c8d587-b429-415f-96f6-628924fed084"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.795108 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c8d587-b429-415f-96f6-628924fed084" (UID: "b4c8d587-b429-415f-96f6-628924fed084"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.835401 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4c8d587-b429-415f-96f6-628924fed084" containerID="b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3" exitCode=137 Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.835450 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.835495 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4c8d587-b429-415f-96f6-628924fed084","Type":"ContainerDied","Data":"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3"} Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.835528 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4c8d587-b429-415f-96f6-628924fed084","Type":"ContainerDied","Data":"4774e9328b610c917f9fd35141fcadf466b7543107fa862abe860e7744c56cb8"} Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.835563 4810 scope.go:117] "RemoveContainer" containerID="b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.838167 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerStarted","Data":"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5"} Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.838256 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerStarted","Data":"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c"} Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.857781 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.857820 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkj9r\" (UniqueName: \"kubernetes.io/projected/b4c8d587-b429-415f-96f6-628924fed084-kube-api-access-qkj9r\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.857832 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c8d587-b429-415f-96f6-628924fed084-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.870611 4810 scope.go:117] "RemoveContainer" containerID="b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3" Feb 19 15:31:56 crc kubenswrapper[4810]: E0219 15:31:56.871159 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3\": container with ID starting with b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3 not found: ID does not exist" containerID="b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.871214 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3"} err="failed to get container status \"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3\": rpc error: code = NotFound desc = could not find container \"b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3\": container with ID starting with b7e5ea663f35522cf345ee4992587f93bc18717e336dd09c7687290a99a11aa3 not found: ID does not exist" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.915377 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.937463 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.948276 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:56 crc kubenswrapper[4810]: E0219 15:31:56.948797 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c8d587-b429-415f-96f6-628924fed084" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.948823 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c8d587-b429-415f-96f6-628924fed084" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.949078 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c8d587-b429-415f-96f6-628924fed084" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.949903 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.958204 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.958462 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.958636 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 15:31:56 crc kubenswrapper[4810]: I0219 15:31:56.959188 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.061048 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jc2\" (UniqueName: \"kubernetes.io/projected/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-kube-api-access-96jc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.061221 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.061270 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 
crc kubenswrapper[4810]: I0219 15:31:57.061292 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.061381 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.163026 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.163083 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.163155 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 
15:31:57.163208 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jc2\" (UniqueName: \"kubernetes.io/projected/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-kube-api-access-96jc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.163717 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.167383 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.167585 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.167853 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.168646 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.183104 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jc2\" (UniqueName: \"kubernetes.io/projected/5a7915d4-6c3f-4bc7-b21d-7d51b675640f-kube-api-access-96jc2\") pod \"nova-cell1-novncproxy-0\" (UID: \"5a7915d4-6c3f-4bc7-b21d-7d51b675640f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.284206 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.452290 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c8d587-b429-415f-96f6-628924fed084" path="/var/lib/kubelet/pods/b4c8d587-b429-415f-96f6-628924fed084/volumes" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.764035 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 15:31:57 crc kubenswrapper[4810]: W0219 15:31:57.764488 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a7915d4_6c3f_4bc7_b21d_7d51b675640f.slice/crio-c62dbfb3fef551c47f6ceabbedf6ce08e35cc8c56ffcc4089fb9748fdc5b32b3 WatchSource:0}: Error finding container c62dbfb3fef551c47f6ceabbedf6ce08e35cc8c56ffcc4089fb9748fdc5b32b3: Status 404 returned error can't find the container with id c62dbfb3fef551c47f6ceabbedf6ce08e35cc8c56ffcc4089fb9748fdc5b32b3 Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.848187 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"5a7915d4-6c3f-4bc7-b21d-7d51b675640f","Type":"ContainerStarted","Data":"c62dbfb3fef551c47f6ceabbedf6ce08e35cc8c56ffcc4089fb9748fdc5b32b3"} Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.857345 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerStarted","Data":"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85"} Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.964919 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.965912 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.976135 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:31:57 crc kubenswrapper[4810]: I0219 15:31:57.996918 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:31:58 crc kubenswrapper[4810]: I0219 15:31:58.867567 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5a7915d4-6c3f-4bc7-b21d-7d51b675640f","Type":"ContainerStarted","Data":"bfe000bc0b31dfce709607fd585b87aa375cc4ed4721af49130544ac2515c529"} Feb 19 15:31:58 crc kubenswrapper[4810]: I0219 15:31:58.867803 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:31:58 crc kubenswrapper[4810]: I0219 15:31:58.879279 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:31:58 crc kubenswrapper[4810]: I0219 15:31:58.889895 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.889873281 podStartE2EDuration="2.889873281s" 
podCreationTimestamp="2026-02-19 15:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:31:58.883050297 +0000 UTC m=+1348.365080441" watchObservedRunningTime="2026-02-19 15:31:58.889873281 +0000 UTC m=+1348.371903415" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.081466 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.082978 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.105220 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xn9\" (UniqueName: \"kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.106438 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.106631 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.106759 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.106834 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.106949 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.127574 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.209801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.210486 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: 
\"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.210601 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.210677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.210786 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xn9\" (UniqueName: \"kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.210868 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.212014 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " 
pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.212906 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.219060 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.219149 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.219288 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: I0219 15:31:59.233268 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xn9\" (UniqueName: \"kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9\") pod \"dnsmasq-dns-7dc9fb8849-t2gx5\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:31:59 crc kubenswrapper[4810]: 
I0219 15:31:59.416003 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:31:59.897474 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerStarted","Data":"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d"} Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:31:59.898527 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:31:59.926980 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.493768033 podStartE2EDuration="5.92696275s" podCreationTimestamp="2026-02-19 15:31:54 +0000 UTC" firstStartedPulling="2026-02-19 15:31:55.742109337 +0000 UTC m=+1345.224139461" lastFinishedPulling="2026-02-19 15:31:59.175304054 +0000 UTC m=+1348.657334178" observedRunningTime="2026-02-19 15:31:59.920619309 +0000 UTC m=+1349.402649433" watchObservedRunningTime="2026-02-19 15:31:59.92696275 +0000 UTC m=+1349.408992864" Feb 19 15:32:00 crc kubenswrapper[4810]: W0219 15:32:00.020804 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod275a98c0_8e6a_4587_8628_54f70b836615.slice/crio-8c49db3d1d18d97ef799aa2fe03c13f1f635369a4b0477795ad010b6baa6d941 WatchSource:0}: Error finding container 8c49db3d1d18d97ef799aa2fe03c13f1f635369a4b0477795ad010b6baa6d941: Status 404 returned error can't find the container with id 8c49db3d1d18d97ef799aa2fe03c13f1f635369a4b0477795ad010b6baa6d941 Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:32:00.021247 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:32:00.907471 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="275a98c0-8e6a-4587-8628-54f70b836615" containerID="87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f" exitCode=0 Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:32:00.908542 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" event={"ID":"275a98c0-8e6a-4587-8628-54f70b836615","Type":"ContainerDied","Data":"87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f"} Feb 19 15:32:00 crc kubenswrapper[4810]: I0219 15:32:00.908590 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" event={"ID":"275a98c0-8e6a-4587-8628-54f70b836615","Type":"ContainerStarted","Data":"8c49db3d1d18d97ef799aa2fe03c13f1f635369a4b0477795ad010b6baa6d941"} Feb 19 15:32:01 crc kubenswrapper[4810]: I0219 15:32:01.668865 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:01 crc kubenswrapper[4810]: I0219 15:32:01.918705 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" event={"ID":"275a98c0-8e6a-4587-8628-54f70b836615","Type":"ContainerStarted","Data":"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc"} Feb 19 15:32:01 crc kubenswrapper[4810]: I0219 15:32:01.918863 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-log" containerID="cri-o://f3afc915b44f3696b28ac2a3768f5a557a2b647fe608482478f7471cf5736c24" gracePeriod=30 Feb 19 15:32:01 crc kubenswrapper[4810]: I0219 15:32:01.919349 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-api" containerID="cri-o://c92ab671905f1460e2a7dccb331259f6a59c52684a2e0df2a88b275d79bf21c4" gracePeriod=30 Feb 19 15:32:01 crc kubenswrapper[4810]: I0219 
15:32:01.946444 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" podStartSLOduration=2.946428885 podStartE2EDuration="2.946428885s" podCreationTimestamp="2026-02-19 15:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:01.94114137 +0000 UTC m=+1351.423171514" watchObservedRunningTime="2026-02-19 15:32:01.946428885 +0000 UTC m=+1351.428459009" Feb 19 15:32:02 crc kubenswrapper[4810]: I0219 15:32:02.284713 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:32:02 crc kubenswrapper[4810]: I0219 15:32:02.928878 4810 generic.go:334] "Generic (PLEG): container finished" podID="db4be167-d56e-491f-851c-c21f30f63112" containerID="f3afc915b44f3696b28ac2a3768f5a557a2b647fe608482478f7471cf5736c24" exitCode=143 Feb 19 15:32:02 crc kubenswrapper[4810]: I0219 15:32:02.930104 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerDied","Data":"f3afc915b44f3696b28ac2a3768f5a557a2b647fe608482478f7471cf5736c24"} Feb 19 15:32:02 crc kubenswrapper[4810]: I0219 15:32:02.930138 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.318951 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.319210 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-central-agent" containerID="cri-o://b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c" gracePeriod=30 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.319652 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="proxy-httpd" containerID="cri-o://9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d" gracePeriod=30 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.319695 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="sg-core" containerID="cri-o://2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85" gracePeriod=30 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.319724 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-notification-agent" containerID="cri-o://d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5" gracePeriod=30 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.944263 4810 generic.go:334] "Generic (PLEG): container finished" podID="db4be167-d56e-491f-851c-c21f30f63112" containerID="c92ab671905f1460e2a7dccb331259f6a59c52684a2e0df2a88b275d79bf21c4" exitCode=0 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.944298 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerDied","Data":"c92ab671905f1460e2a7dccb331259f6a59c52684a2e0df2a88b275d79bf21c4"} Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947708 4810 generic.go:334] "Generic (PLEG): container finished" podID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerID="9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d" exitCode=0 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947734 4810 generic.go:334] "Generic (PLEG): container finished" podID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" 
containerID="2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85" exitCode=2 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947744 4810 generic.go:334] "Generic (PLEG): container finished" podID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerID="d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5" exitCode=0 Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947737 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerDied","Data":"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d"} Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947853 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerDied","Data":"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85"} Feb 19 15:32:03 crc kubenswrapper[4810]: I0219 15:32:03.947866 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerDied","Data":"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5"} Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.031412 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.207554 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data\") pod \"db4be167-d56e-491f-851c-c21f30f63112\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.207852 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle\") pod \"db4be167-d56e-491f-851c-c21f30f63112\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.207980 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v46gz\" (UniqueName: \"kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz\") pod \"db4be167-d56e-491f-851c-c21f30f63112\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.208052 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs\") pod \"db4be167-d56e-491f-851c-c21f30f63112\" (UID: \"db4be167-d56e-491f-851c-c21f30f63112\") " Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.209578 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs" (OuterVolumeSpecName: "logs") pod "db4be167-d56e-491f-851c-c21f30f63112" (UID: "db4be167-d56e-491f-851c-c21f30f63112"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.213164 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz" (OuterVolumeSpecName: "kube-api-access-v46gz") pod "db4be167-d56e-491f-851c-c21f30f63112" (UID: "db4be167-d56e-491f-851c-c21f30f63112"). InnerVolumeSpecName "kube-api-access-v46gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.243418 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4be167-d56e-491f-851c-c21f30f63112" (UID: "db4be167-d56e-491f-851c-c21f30f63112"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.255079 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data" (OuterVolumeSpecName: "config-data") pod "db4be167-d56e-491f-851c-c21f30f63112" (UID: "db4be167-d56e-491f-851c-c21f30f63112"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.311276 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v46gz\" (UniqueName: \"kubernetes.io/projected/db4be167-d56e-491f-851c-c21f30f63112-kube-api-access-v46gz\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.311309 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4be167-d56e-491f-851c-c21f30f63112-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.311320 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.311394 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4be167-d56e-491f-851c-c21f30f63112-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.986832 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db4be167-d56e-491f-851c-c21f30f63112","Type":"ContainerDied","Data":"6b9772e1210b523a5eae56618cf676698a333f36d8eea10d60634c0ac175381c"} Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.986904 4810 scope.go:117] "RemoveContainer" containerID="c92ab671905f1460e2a7dccb331259f6a59c52684a2e0df2a88b275d79bf21c4" Feb 19 15:32:04 crc kubenswrapper[4810]: I0219 15:32:04.987092 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.067384 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.067951 4810 scope.go:117] "RemoveContainer" containerID="f3afc915b44f3696b28ac2a3768f5a557a2b647fe608482478f7471cf5736c24" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.105126 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.139882 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:05 crc kubenswrapper[4810]: E0219 15:32:05.140393 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-api" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.140411 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-api" Feb 19 15:32:05 crc kubenswrapper[4810]: E0219 15:32:05.140438 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-log" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.140444 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-log" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.140650 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-log" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.140664 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4be167-d56e-491f-851c-c21f30f63112" containerName="nova-api-api" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.141668 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.145244 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.150290 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.150547 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.153998 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.227788 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.227860 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.227893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.227925 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvq8\" (UniqueName: 
\"kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.228206 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.228351 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.330962 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.331228 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.331299 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 
15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.331543 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.331604 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvq8\" (UniqueName: \"kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.331709 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.337317 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.338941 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.342352 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data\") pod \"nova-api-0\" (UID: 
\"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.344001 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.346277 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.354851 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvq8\" (UniqueName: \"kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8\") pod \"nova-api-0\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.451828 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4be167-d56e-491f-851c-c21f30f63112" path="/var/lib/kubelet/pods/db4be167-d56e-491f-851c-c21f30f63112/volumes" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.465514 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.734235 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.843728 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x4n\" (UniqueName: \"kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.843810 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.843873 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.843918 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.843957 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.844030 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.844057 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.844090 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd\") pod \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\" (UID: \"f2594f4d-4c2f-4fc1-bda2-98a148e09b20\") " Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.844465 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.844505 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.849018 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n" (OuterVolumeSpecName: "kube-api-access-27x4n") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "kube-api-access-27x4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.852544 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts" (OuterVolumeSpecName: "scripts") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.873436 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.904382 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.948691 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27x4n\" (UniqueName: \"kubernetes.io/projected/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-kube-api-access-27x4n\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.949025 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.949046 4810 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.949063 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.949081 4810 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.949098 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.953559 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: 
"f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.977002 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:05 crc kubenswrapper[4810]: I0219 15:32:05.978255 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data" (OuterVolumeSpecName: "config-data") pod "f2594f4d-4c2f-4fc1-bda2-98a148e09b20" (UID: "f2594f4d-4c2f-4fc1-bda2-98a148e09b20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:05 crc kubenswrapper[4810]: W0219 15:32:05.981377 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1120f3c4_1323_4ffe_8798_b15e58615278.slice/crio-5c8bcb9cab03480eccfcad397fcab4db26af8c7b42bbdd88a46bf05b9ed6f929 WatchSource:0}: Error finding container 5c8bcb9cab03480eccfcad397fcab4db26af8c7b42bbdd88a46bf05b9ed6f929: Status 404 returned error can't find the container with id 5c8bcb9cab03480eccfcad397fcab4db26af8c7b42bbdd88a46bf05b9ed6f929 Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.003023 4810 generic.go:334] "Generic (PLEG): container finished" podID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerID="b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c" exitCode=0 Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.003086 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerDied","Data":"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c"} Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.003114 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f2594f4d-4c2f-4fc1-bda2-98a148e09b20","Type":"ContainerDied","Data":"c1f4a9d4ae39c71c4f7ab55984edc9f42ceec7ca748ce3c8a863c795e3882a8e"} Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.003134 4810 scope.go:117] "RemoveContainer" containerID="9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.003298 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.013207 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerStarted","Data":"5c8bcb9cab03480eccfcad397fcab4db26af8c7b42bbdd88a46bf05b9ed6f929"} Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.051419 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.051452 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2594f4d-4c2f-4fc1-bda2-98a148e09b20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.056694 4810 scope.go:117] "RemoveContainer" containerID="2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.058310 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.067393 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.076856 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 
15:32:06.077263 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="proxy-httpd" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077278 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="proxy-httpd" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.077497 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="sg-core" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077514 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="sg-core" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.077564 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-notification-agent" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077572 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-notification-agent" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.077585 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-central-agent" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077592 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-central-agent" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077861 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="proxy-httpd" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077881 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-central-agent" Feb 19 15:32:06 crc 
kubenswrapper[4810]: I0219 15:32:06.077893 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="sg-core" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.077909 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" containerName="ceilometer-notification-agent" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.079936 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.084803 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.085033 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.095757 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.123416 4810 scope.go:117] "RemoveContainer" containerID="d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.126722 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.165468 4810 scope.go:117] "RemoveContainer" containerID="b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.204258 4810 scope.go:117] "RemoveContainer" containerID="9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.205088 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d\": container with ID starting with 9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d not found: ID does not exist" containerID="9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.205134 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d"} err="failed to get container status \"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d\": rpc error: code = NotFound desc = could not find container \"9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d\": container with ID starting with 9125ed9339214110f69897810b0e48bb3128158f2c4b9ccb9353a19533ba604d not found: ID does not exist" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.205154 4810 scope.go:117] "RemoveContainer" containerID="2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.206353 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85\": container with ID starting with 2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85 not found: ID does not exist" containerID="2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.206392 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85"} err="failed to get container status \"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85\": rpc error: code = NotFound desc = could not find container \"2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85\": container with ID 
starting with 2267682407e573c3802691ada945d211fd7e4b15c6d2e2d22cf717dea7754b85 not found: ID does not exist" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.206405 4810 scope.go:117] "RemoveContainer" containerID="d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.206779 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5\": container with ID starting with d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5 not found: ID does not exist" containerID="d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.206821 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5"} err="failed to get container status \"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5\": rpc error: code = NotFound desc = could not find container \"d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5\": container with ID starting with d89267d6dac559d2a521953fb65466a02a167ef0f42f9337d1bc8f64f8e801a5 not found: ID does not exist" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.206837 4810 scope.go:117] "RemoveContainer" containerID="b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c" Feb 19 15:32:06 crc kubenswrapper[4810]: E0219 15:32:06.207499 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c\": container with ID starting with b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c not found: ID does not exist" containerID="b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c" Feb 19 
15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.207558 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c"} err="failed to get container status \"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c\": rpc error: code = NotFound desc = could not find container \"b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c\": container with ID starting with b146e7e942b25643f6a8f23e7901f7c71009b2241073e62495a7a1015b18112c not found: ID does not exist" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.254789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-log-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.254979 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-run-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.255032 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5t4\" (UniqueName: \"kubernetes.io/projected/e7d91a4d-5b61-404e-a58b-cb426722f883-kube-api-access-js5t4\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.255055 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-scripts\") pod \"ceilometer-0\" (UID: 
\"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.255208 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.255383 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-config-data\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.255446 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.256028 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358083 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 
15:32:06.358183 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-config-data\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358215 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358298 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-log-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358462 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-run-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358494 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5t4\" (UniqueName: \"kubernetes.io/projected/e7d91a4d-5b61-404e-a58b-cb426722f883-kube-api-access-js5t4\") pod 
\"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.358515 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-scripts\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.359050 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-log-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.359444 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7d91a4d-5b61-404e-a58b-cb426722f883-run-httpd\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.361770 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-config-data\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.362311 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-scripts\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.362670 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.362843 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.369386 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d91a4d-5b61-404e-a58b-cb426722f883-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.380054 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5t4\" (UniqueName: \"kubernetes.io/projected/e7d91a4d-5b61-404e-a58b-cb426722f883-kube-api-access-js5t4\") pod \"ceilometer-0\" (UID: \"e7d91a4d-5b61-404e-a58b-cb426722f883\") " pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.455856 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 15:32:06 crc kubenswrapper[4810]: I0219 15:32:06.951284 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.024209 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerStarted","Data":"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662"} Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.025204 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerStarted","Data":"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78"} Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.032042 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7d91a4d-5b61-404e-a58b-cb426722f883","Type":"ContainerStarted","Data":"61cf75ecf53b55c33c98212c5ad0ab385ad4652e0c69073e9d20e36a8f190b5e"} Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.057791 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.057767812 podStartE2EDuration="2.057767812s" podCreationTimestamp="2026-02-19 15:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:07.050368364 +0000 UTC m=+1356.532398508" watchObservedRunningTime="2026-02-19 15:32:07.057767812 +0000 UTC m=+1356.539797936" Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.284943 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.320102 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:32:07 crc kubenswrapper[4810]: I0219 15:32:07.451677 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2594f4d-4c2f-4fc1-bda2-98a148e09b20" path="/var/lib/kubelet/pods/f2594f4d-4c2f-4fc1-bda2-98a148e09b20/volumes" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.044797 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7d91a4d-5b61-404e-a58b-cb426722f883","Type":"ContainerStarted","Data":"ac6990e103b1b508ec962576c241925fc6ac704d611478cd1f6d2a4840873610"} Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.045147 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7d91a4d-5b61-404e-a58b-cb426722f883","Type":"ContainerStarted","Data":"e65dc6917279ed73ebbc95d0132561f41dffb02e119473230cd2e9f6204ca6ba"} Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.063455 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.204281 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hf7qg"] Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.205637 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.207685 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.207844 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.223374 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf7qg"] Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.297036 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cftj\" (UniqueName: \"kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.297132 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.297173 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.297241 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.398712 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.398818 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cftj\" (UniqueName: \"kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.398881 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.398917 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.403893 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.404067 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.404555 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.420717 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cftj\" (UniqueName: \"kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj\") pod \"nova-cell1-cell-mapping-hf7qg\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.523694 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:08 crc kubenswrapper[4810]: W0219 15:32:08.995009 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf834f671_3add_4bfc_8152_596d66e90f22.slice/crio-fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238 WatchSource:0}: Error finding container fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238: Status 404 returned error can't find the container with id fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238 Feb 19 15:32:08 crc kubenswrapper[4810]: I0219 15:32:08.998853 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf7qg"] Feb 19 15:32:09 crc kubenswrapper[4810]: I0219 15:32:09.065462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf7qg" event={"ID":"f834f671-3add-4bfc-8152-596d66e90f22","Type":"ContainerStarted","Data":"fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238"} Feb 19 15:32:09 crc kubenswrapper[4810]: I0219 15:32:09.069495 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7d91a4d-5b61-404e-a58b-cb426722f883","Type":"ContainerStarted","Data":"6620634cbef3df55485b66447dcd8e726560a5938873cf32536f20c8de70ee50"} Feb 19 15:32:09 crc kubenswrapper[4810]: I0219 15:32:09.417554 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:32:09 crc kubenswrapper[4810]: I0219 15:32:09.541805 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:32:09 crc kubenswrapper[4810]: I0219 15:32:09.542494 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="dnsmasq-dns" 
containerID="cri-o://968b59a59d30b2d92d2973ce319917f2d174a7454d4003dff6b2c557a24c3a76" gracePeriod=10 Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.086313 4810 generic.go:334] "Generic (PLEG): container finished" podID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerID="968b59a59d30b2d92d2973ce319917f2d174a7454d4003dff6b2c557a24c3a76" exitCode=0 Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.086524 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" event={"ID":"d13abd15-5b9b-4e00-984f-9dabbe51ddbc","Type":"ContainerDied","Data":"968b59a59d30b2d92d2973ce319917f2d174a7454d4003dff6b2c557a24c3a76"} Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.097604 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf7qg" event={"ID":"f834f671-3add-4bfc-8152-596d66e90f22","Type":"ContainerStarted","Data":"eb023c9cd6a803467c474cff2a48a3d6536859d9cdbc3785ab4eb9814aa6c925"} Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.122519 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hf7qg" podStartSLOduration=2.122503524 podStartE2EDuration="2.122503524s" podCreationTimestamp="2026-02-19 15:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:10.121436057 +0000 UTC m=+1359.603466181" watchObservedRunningTime="2026-02-19 15:32:10.122503524 +0000 UTC m=+1359.604533648" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.193858 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.236749 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.236805 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.236997 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.237051 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.237121 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.237180 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdf9n\" 
(UniqueName: \"kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n\") pod \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\" (UID: \"d13abd15-5b9b-4e00-984f-9dabbe51ddbc\") " Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.269550 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n" (OuterVolumeSpecName: "kube-api-access-fdf9n") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "kube-api-access-fdf9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.306569 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.314160 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.320550 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config" (OuterVolumeSpecName: "config") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.340295 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.340406 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.340420 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdf9n\" (UniqueName: \"kubernetes.io/projected/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-kube-api-access-fdf9n\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.340452 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.342959 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.356987 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d13abd15-5b9b-4e00-984f-9dabbe51ddbc" (UID: "d13abd15-5b9b-4e00-984f-9dabbe51ddbc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.442469 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:10 crc kubenswrapper[4810]: I0219 15:32:10.442504 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d13abd15-5b9b-4e00-984f-9dabbe51ddbc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.109313 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7d91a4d-5b61-404e-a58b-cb426722f883","Type":"ContainerStarted","Data":"0648552c36d1af1b785b524b2a94ea4f2718e1cc9b77f90f7db7a36e665bc3ad"} Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.109481 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.114376 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.114429 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc76f8d79-b9r9k" event={"ID":"d13abd15-5b9b-4e00-984f-9dabbe51ddbc","Type":"ContainerDied","Data":"3782f704a1deb319907b16927aac1f196131400ef56fc6f3b4943495c1de3d0b"} Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.114496 4810 scope.go:117] "RemoveContainer" containerID="968b59a59d30b2d92d2973ce319917f2d174a7454d4003dff6b2c557a24c3a76" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.162841 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.25907054 podStartE2EDuration="5.162811315s" podCreationTimestamp="2026-02-19 15:32:06 +0000 UTC" firstStartedPulling="2026-02-19 15:32:06.953846308 +0000 UTC m=+1356.435876432" lastFinishedPulling="2026-02-19 15:32:09.857587083 +0000 UTC m=+1359.339617207" observedRunningTime="2026-02-19 15:32:11.138485516 +0000 UTC m=+1360.620515650" watchObservedRunningTime="2026-02-19 15:32:11.162811315 +0000 UTC m=+1360.644841439" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.166487 4810 scope.go:117] "RemoveContainer" containerID="f762a8738d3f47e401158c999773d4f19a8cf6bd6c7936ab82cb0c741248ad3e" Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.177230 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.187704 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cc76f8d79-b9r9k"] Feb 19 15:32:11 crc kubenswrapper[4810]: I0219 15:32:11.453277 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" path="/var/lib/kubelet/pods/d13abd15-5b9b-4e00-984f-9dabbe51ddbc/volumes" Feb 19 15:32:15 crc kubenswrapper[4810]: I0219 15:32:15.159771 4810 
generic.go:334] "Generic (PLEG): container finished" podID="f834f671-3add-4bfc-8152-596d66e90f22" containerID="eb023c9cd6a803467c474cff2a48a3d6536859d9cdbc3785ab4eb9814aa6c925" exitCode=0 Feb 19 15:32:15 crc kubenswrapper[4810]: I0219 15:32:15.159891 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf7qg" event={"ID":"f834f671-3add-4bfc-8152-596d66e90f22","Type":"ContainerDied","Data":"eb023c9cd6a803467c474cff2a48a3d6536859d9cdbc3785ab4eb9814aa6c925"} Feb 19 15:32:15 crc kubenswrapper[4810]: I0219 15:32:15.466508 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:32:15 crc kubenswrapper[4810]: I0219 15:32:15.466553 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.481642 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.481675 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.570013 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.664892 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data\") pod \"f834f671-3add-4bfc-8152-596d66e90f22\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.664971 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts\") pod \"f834f671-3add-4bfc-8152-596d66e90f22\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.665073 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle\") pod \"f834f671-3add-4bfc-8152-596d66e90f22\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.665169 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cftj\" (UniqueName: \"kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj\") pod \"f834f671-3add-4bfc-8152-596d66e90f22\" (UID: \"f834f671-3add-4bfc-8152-596d66e90f22\") " Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.670526 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj" (OuterVolumeSpecName: "kube-api-access-2cftj") pod "f834f671-3add-4bfc-8152-596d66e90f22" (UID: "f834f671-3add-4bfc-8152-596d66e90f22"). InnerVolumeSpecName "kube-api-access-2cftj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.677543 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts" (OuterVolumeSpecName: "scripts") pod "f834f671-3add-4bfc-8152-596d66e90f22" (UID: "f834f671-3add-4bfc-8152-596d66e90f22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.701439 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f834f671-3add-4bfc-8152-596d66e90f22" (UID: "f834f671-3add-4bfc-8152-596d66e90f22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.707519 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data" (OuterVolumeSpecName: "config-data") pod "f834f671-3add-4bfc-8152-596d66e90f22" (UID: "f834f671-3add-4bfc-8152-596d66e90f22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.779871 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.779903 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.779913 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f834f671-3add-4bfc-8152-596d66e90f22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:16 crc kubenswrapper[4810]: I0219 15:32:16.779923 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cftj\" (UniqueName: \"kubernetes.io/projected/f834f671-3add-4bfc-8152-596d66e90f22-kube-api-access-2cftj\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.196649 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hf7qg" event={"ID":"f834f671-3add-4bfc-8152-596d66e90f22","Type":"ContainerDied","Data":"fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238"} Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.197045 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fed110f1ba52d2ca42d9ab496f60368740e3e263fc2110696241568bca816238" Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.196770 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hf7qg" Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.376199 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.376482 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-log" containerID="cri-o://bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78" gracePeriod=30 Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.376827 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-api" containerID="cri-o://83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662" gracePeriod=30 Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.419592 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.419817 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9776a876-19db-446c-a7bf-d6fe0111d7b8" containerName="nova-scheduler-scheduler" containerID="cri-o://9050cf4664528bc43d0a9883f3bb240ba2cdcad9a8907febfcebd4f66322d6a0" gracePeriod=30 Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.466967 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.467359 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-metadata" containerID="cri-o://3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95" gracePeriod=30 Feb 19 15:32:17 crc kubenswrapper[4810]: I0219 15:32:17.467549 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-log" containerID="cri-o://acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17" gracePeriod=30 Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.208866 4810 generic.go:334] "Generic (PLEG): container finished" podID="deba4978-0921-4c62-9452-9a47fe24feb7" containerID="acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17" exitCode=143 Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.208958 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerDied","Data":"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17"} Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.212075 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerDied","Data":"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78"} Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.212098 4810 generic.go:334] "Generic (PLEG): container finished" podID="1120f3c4-1323-4ffe-8798-b15e58615278" containerID="bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78" exitCode=143 Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.214417 4810 generic.go:334] "Generic (PLEG): container finished" podID="9776a876-19db-446c-a7bf-d6fe0111d7b8" containerID="9050cf4664528bc43d0a9883f3bb240ba2cdcad9a8907febfcebd4f66322d6a0" exitCode=0 Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.214447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9776a876-19db-446c-a7bf-d6fe0111d7b8","Type":"ContainerDied","Data":"9050cf4664528bc43d0a9883f3bb240ba2cdcad9a8907febfcebd4f66322d6a0"} Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 
15:32:18.452027 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.511128 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data\") pod \"9776a876-19db-446c-a7bf-d6fe0111d7b8\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.511198 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle\") pod \"9776a876-19db-446c-a7bf-d6fe0111d7b8\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.511317 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4rpb\" (UniqueName: \"kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb\") pod \"9776a876-19db-446c-a7bf-d6fe0111d7b8\" (UID: \"9776a876-19db-446c-a7bf-d6fe0111d7b8\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.519666 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb" (OuterVolumeSpecName: "kube-api-access-f4rpb") pod "9776a876-19db-446c-a7bf-d6fe0111d7b8" (UID: "9776a876-19db-446c-a7bf-d6fe0111d7b8"). InnerVolumeSpecName "kube-api-access-f4rpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.557700 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9776a876-19db-446c-a7bf-d6fe0111d7b8" (UID: "9776a876-19db-446c-a7bf-d6fe0111d7b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.604840 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data" (OuterVolumeSpecName: "config-data") pod "9776a876-19db-446c-a7bf-d6fe0111d7b8" (UID: "9776a876-19db-446c-a7bf-d6fe0111d7b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.613245 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.613275 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9776a876-19db-446c-a7bf-d6fe0111d7b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.613286 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4rpb\" (UniqueName: \"kubernetes.io/projected/9776a876-19db-446c-a7bf-d6fe0111d7b8-kube-api-access-f4rpb\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.824035 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.918567 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle\") pod \"deba4978-0921-4c62-9452-9a47fe24feb7\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.918625 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs\") pod \"deba4978-0921-4c62-9452-9a47fe24feb7\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.918752 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data\") pod \"deba4978-0921-4c62-9452-9a47fe24feb7\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.918790 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs\") pod \"deba4978-0921-4c62-9452-9a47fe24feb7\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.918868 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4dgr\" (UniqueName: \"kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr\") pod \"deba4978-0921-4c62-9452-9a47fe24feb7\" (UID: \"deba4978-0921-4c62-9452-9a47fe24feb7\") " Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.919747 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs" (OuterVolumeSpecName: "logs") pod "deba4978-0921-4c62-9452-9a47fe24feb7" (UID: "deba4978-0921-4c62-9452-9a47fe24feb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.923161 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr" (OuterVolumeSpecName: "kube-api-access-d4dgr") pod "deba4978-0921-4c62-9452-9a47fe24feb7" (UID: "deba4978-0921-4c62-9452-9a47fe24feb7"). InnerVolumeSpecName "kube-api-access-d4dgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.943574 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deba4978-0921-4c62-9452-9a47fe24feb7" (UID: "deba4978-0921-4c62-9452-9a47fe24feb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.947736 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data" (OuterVolumeSpecName: "config-data") pod "deba4978-0921-4c62-9452-9a47fe24feb7" (UID: "deba4978-0921-4c62-9452-9a47fe24feb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:18 crc kubenswrapper[4810]: I0219 15:32:18.991650 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "deba4978-0921-4c62-9452-9a47fe24feb7" (UID: "deba4978-0921-4c62-9452-9a47fe24feb7"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.021592 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.021639 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deba4978-0921-4c62-9452-9a47fe24feb7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.021650 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4dgr\" (UniqueName: \"kubernetes.io/projected/deba4978-0921-4c62-9452-9a47fe24feb7-kube-api-access-d4dgr\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.021661 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.021672 4810 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/deba4978-0921-4c62-9452-9a47fe24feb7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.225197 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.225181 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9776a876-19db-446c-a7bf-d6fe0111d7b8","Type":"ContainerDied","Data":"e26af5d090a2b6422428d4303d703f7d22f7771910d627d6bbd17faed1d8906a"} Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.225523 4810 scope.go:117] "RemoveContainer" containerID="9050cf4664528bc43d0a9883f3bb240ba2cdcad9a8907febfcebd4f66322d6a0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.227815 4810 generic.go:334] "Generic (PLEG): container finished" podID="deba4978-0921-4c62-9452-9a47fe24feb7" containerID="3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95" exitCode=0 Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.227855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerDied","Data":"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95"} Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.227884 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deba4978-0921-4c62-9452-9a47fe24feb7","Type":"ContainerDied","Data":"9c0cc70373e1f5deebdb6c556b83e6477c32297bae250fc5ee8217259477409e"} Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.227937 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.255071 4810 scope.go:117] "RemoveContainer" containerID="3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.266137 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.280804 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.288162 4810 scope.go:117] "RemoveContainer" containerID="acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.293312 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.311953 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312572 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-metadata" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312600 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-metadata" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312632 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f834f671-3add-4bfc-8152-596d66e90f22" containerName="nova-manage" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312641 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f834f671-3add-4bfc-8152-596d66e90f22" containerName="nova-manage" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312668 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" 
containerName="nova-metadata-log" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312677 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-log" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312689 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="init" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312697 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="init" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312716 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9776a876-19db-446c-a7bf-d6fe0111d7b8" containerName="nova-scheduler-scheduler" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312724 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9776a876-19db-446c-a7bf-d6fe0111d7b8" containerName="nova-scheduler-scheduler" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.312741 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="dnsmasq-dns" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312749 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="dnsmasq-dns" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.312983 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13abd15-5b9b-4e00-984f-9dabbe51ddbc" containerName="dnsmasq-dns" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.313015 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9776a876-19db-446c-a7bf-d6fe0111d7b8" containerName="nova-scheduler-scheduler" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.313031 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f834f671-3add-4bfc-8152-596d66e90f22" 
containerName="nova-manage" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.313044 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-metadata" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.313058 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" containerName="nova-metadata-log" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.314719 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.319726 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.319973 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.330899 4810 scope.go:117] "RemoveContainer" containerID="3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.331440 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95\": container with ID starting with 3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95 not found: ID does not exist" containerID="3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.331476 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95"} err="failed to get container status \"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95\": rpc error: code = NotFound desc = could not find container 
\"3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95\": container with ID starting with 3cd69082a7ac9af779601af9a54ce467c63853d8132eb18f4aa9ff5eb2315f95 not found: ID does not exist" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.331502 4810 scope.go:117] "RemoveContainer" containerID="acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17" Feb 19 15:32:19 crc kubenswrapper[4810]: E0219 15:32:19.331758 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17\": container with ID starting with acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17 not found: ID does not exist" containerID="acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.331784 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17"} err="failed to get container status \"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17\": rpc error: code = NotFound desc = could not find container \"acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17\": container with ID starting with acc612b198e2bb735041a74b3bf438f971ed76411bde0e76a3b0dd4d3a920c17 not found: ID does not exist" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.340361 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.347974 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.365962 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.367247 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.370003 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.375449 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.426839 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.426910 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmmr\" (UniqueName: \"kubernetes.io/projected/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-kube-api-access-wpmmr\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.426947 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-config-data\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.426979 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 
15:32:19.427003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f36ad344-e946-4221-892d-3ffe8fbdd59b-logs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.427021 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvfz\" (UniqueName: \"kubernetes.io/projected/f36ad344-e946-4221-892d-3ffe8fbdd59b-kube-api-access-8nvfz\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.427040 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.427270 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-config-data\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.462644 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9776a876-19db-446c-a7bf-d6fe0111d7b8" path="/var/lib/kubelet/pods/9776a876-19db-446c-a7bf-d6fe0111d7b8/volumes" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.464716 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deba4978-0921-4c62-9452-9a47fe24feb7" path="/var/lib/kubelet/pods/deba4978-0921-4c62-9452-9a47fe24feb7/volumes" Feb 19 15:32:19 crc 
kubenswrapper[4810]: I0219 15:32:19.530730 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.530838 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmmr\" (UniqueName: \"kubernetes.io/projected/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-kube-api-access-wpmmr\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.530887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-config-data\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.530932 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.530966 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f36ad344-e946-4221-892d-3ffe8fbdd59b-logs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.530992 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvfz\" (UniqueName: 
\"kubernetes.io/projected/f36ad344-e946-4221-892d-3ffe8fbdd59b-kube-api-access-8nvfz\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.531035 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.531080 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-config-data\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.533989 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f36ad344-e946-4221-892d-3ffe8fbdd59b-logs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.535961 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-config-data\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.536061 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: 
I0219 15:32:19.536527 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-config-data\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.537112 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.537148 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.537258 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.539706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f36ad344-e946-4221-892d-3ffe8fbdd59b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.550522 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvfz\" (UniqueName: 
\"kubernetes.io/projected/f36ad344-e946-4221-892d-3ffe8fbdd59b-kube-api-access-8nvfz\") pod \"nova-metadata-0\" (UID: \"f36ad344-e946-4221-892d-3ffe8fbdd59b\") " pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.551278 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmmr\" (UniqueName: \"kubernetes.io/projected/d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4-kube-api-access-wpmmr\") pod \"nova-scheduler-0\" (UID: \"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4\") " pod="openstack/nova-scheduler-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.645078 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 15:32:19 crc kubenswrapper[4810]: I0219 15:32:19.688723 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 15:32:20 crc kubenswrapper[4810]: W0219 15:32:20.153907 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf36ad344_e946_4221_892d_3ffe8fbdd59b.slice/crio-c6351edcf2679df19703a7ceb7bb534a95f811a03ffe261463e4c0399447273a WatchSource:0}: Error finding container c6351edcf2679df19703a7ceb7bb534a95f811a03ffe261463e4c0399447273a: Status 404 returned error can't find the container with id c6351edcf2679df19703a7ceb7bb534a95f811a03ffe261463e4c0399447273a Feb 19 15:32:20 crc kubenswrapper[4810]: I0219 15:32:20.155083 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 15:32:20 crc kubenswrapper[4810]: I0219 15:32:20.241830 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 15:32:20 crc kubenswrapper[4810]: I0219 15:32:20.266042 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f36ad344-e946-4221-892d-3ffe8fbdd59b","Type":"ContainerStarted","Data":"c6351edcf2679df19703a7ceb7bb534a95f811a03ffe261463e4c0399447273a"} Feb 19 15:32:20 crc kubenswrapper[4810]: W0219 15:32:20.278279 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd70a0a1b_ed2d_46f1_aeb9_a335de9b06d4.slice/crio-5b618feec84668a7233e75d2d116b1fbd6762c44348b4c0d14b88c16ba912ebc WatchSource:0}: Error finding container 5b618feec84668a7233e75d2d116b1fbd6762c44348b4c0d14b88c16ba912ebc: Status 404 returned error can't find the container with id 5b618feec84668a7233e75d2d116b1fbd6762c44348b4c0d14b88c16ba912ebc Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.004784 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.070689 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.070885 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfvq8\" (UniqueName: \"kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.070982 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.071857 4810 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs" (OuterVolumeSpecName: "logs") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: "1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.072045 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.072130 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.072174 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs\") pod \"1120f3c4-1323-4ffe-8798-b15e58615278\" (UID: \"1120f3c4-1323-4ffe-8798-b15e58615278\") " Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.072794 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1120f3c4-1323-4ffe-8798-b15e58615278-logs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.074639 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8" (OuterVolumeSpecName: "kube-api-access-bfvq8") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: 
"1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "kube-api-access-bfvq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.098664 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data" (OuterVolumeSpecName: "config-data") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: "1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.130894 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: "1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.150666 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: "1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.152559 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1120f3c4-1323-4ffe-8798-b15e58615278" (UID: "1120f3c4-1323-4ffe-8798-b15e58615278"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.174954 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.175009 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfvq8\" (UniqueName: \"kubernetes.io/projected/1120f3c4-1323-4ffe-8798-b15e58615278-kube-api-access-bfvq8\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.175031 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.175051 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.175068 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1120f3c4-1323-4ffe-8798-b15e58615278-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.286154 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f36ad344-e946-4221-892d-3ffe8fbdd59b","Type":"ContainerStarted","Data":"ad3221d34446778cf5aa06a120ee3623be2cb79ccf71fa0fe0fdc8508b537b39"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.286215 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f36ad344-e946-4221-892d-3ffe8fbdd59b","Type":"ContainerStarted","Data":"421f31f709e42d33e806da8cbccdf634f54f5d5f42da772671dff85eade1c72b"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.290598 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4","Type":"ContainerStarted","Data":"755c6f02f55802e87f157919a987564dd4c473e1c99e2dd09cad9a38d6522eba"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.290661 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4","Type":"ContainerStarted","Data":"5b618feec84668a7233e75d2d116b1fbd6762c44348b4c0d14b88c16ba912ebc"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.294289 4810 generic.go:334] "Generic (PLEG): container finished" podID="1120f3c4-1323-4ffe-8798-b15e58615278" containerID="83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662" exitCode=0 Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.294377 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerDied","Data":"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.294426 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1120f3c4-1323-4ffe-8798-b15e58615278","Type":"ContainerDied","Data":"5c8bcb9cab03480eccfcad397fcab4db26af8c7b42bbdd88a46bf05b9ed6f929"} Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.294458 4810 scope.go:117] "RemoveContainer" containerID="83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.294490 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.311762 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.311693782 podStartE2EDuration="2.311693782s" podCreationTimestamp="2026-02-19 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:21.30771741 +0000 UTC m=+1370.789747564" watchObservedRunningTime="2026-02-19 15:32:21.311693782 +0000 UTC m=+1370.793723936" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.331615 4810 scope.go:117] "RemoveContainer" containerID="bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.338179 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.338133084 podStartE2EDuration="2.338133084s" podCreationTimestamp="2026-02-19 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:21.333042395 +0000 UTC m=+1370.815072539" watchObservedRunningTime="2026-02-19 15:32:21.338133084 +0000 UTC m=+1370.820163208" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.367905 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.374129 4810 scope.go:117] "RemoveContainer" containerID="83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662" Feb 19 15:32:21 crc kubenswrapper[4810]: E0219 15:32:21.374775 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662\": container with ID starting with 
83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662 not found: ID does not exist" containerID="83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.374814 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662"} err="failed to get container status \"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662\": rpc error: code = NotFound desc = could not find container \"83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662\": container with ID starting with 83dcf28c966753013525c88cc66000052f7461b478620e9cca2052eda9edb662 not found: ID does not exist" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.374843 4810 scope.go:117] "RemoveContainer" containerID="bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78" Feb 19 15:32:21 crc kubenswrapper[4810]: E0219 15:32:21.375539 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78\": container with ID starting with bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78 not found: ID does not exist" containerID="bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.375593 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78"} err="failed to get container status \"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78\": rpc error: code = NotFound desc = could not find container \"bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78\": container with ID starting with bfdb849b75abf04adb5f644e93c3f65364ac03de4b6998e5eb22cf3fb4118b78 not found: ID does not 
exist" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.385612 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.394230 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:21 crc kubenswrapper[4810]: E0219 15:32:21.394748 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-api" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.394771 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-api" Feb 19 15:32:21 crc kubenswrapper[4810]: E0219 15:32:21.394793 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-log" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.394802 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-log" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.395061 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-log" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.395111 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" containerName="nova-api-api" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.396384 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.406439 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.406533 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.406654 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.409118 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.453931 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1120f3c4-1323-4ffe-8798-b15e58615278" path="/var/lib/kubelet/pods/1120f3c4-1323-4ffe-8798-b15e58615278/volumes" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.480366 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6397af05-d030-46c2-8a0f-a90beb9b2502-logs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.480416 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-public-tls-certs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.480507 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.480597 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntt5\" (UniqueName: \"kubernetes.io/projected/6397af05-d030-46c2-8a0f-a90beb9b2502-kube-api-access-mntt5\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.481041 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.481187 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-config-data\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.583648 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.583736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-config-data\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.583864 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6397af05-d030-46c2-8a0f-a90beb9b2502-logs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.583904 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-public-tls-certs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.583988 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.584049 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntt5\" (UniqueName: \"kubernetes.io/projected/6397af05-d030-46c2-8a0f-a90beb9b2502-kube-api-access-mntt5\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.584920 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6397af05-d030-46c2-8a0f-a90beb9b2502-logs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.589209 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 
15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.590208 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.590635 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-config-data\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.590866 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6397af05-d030-46c2-8a0f-a90beb9b2502-public-tls-certs\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.605440 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntt5\" (UniqueName: \"kubernetes.io/projected/6397af05-d030-46c2-8a0f-a90beb9b2502-kube-api-access-mntt5\") pod \"nova-api-0\" (UID: \"6397af05-d030-46c2-8a0f-a90beb9b2502\") " pod="openstack/nova-api-0" Feb 19 15:32:21 crc kubenswrapper[4810]: I0219 15:32:21.721798 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 15:32:22 crc kubenswrapper[4810]: W0219 15:32:22.282751 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6397af05_d030_46c2_8a0f_a90beb9b2502.slice/crio-8c2c929b825b8c0b7372b6797ac977dd0218a672d0b6cec53c61516afedfcfd6 WatchSource:0}: Error finding container 8c2c929b825b8c0b7372b6797ac977dd0218a672d0b6cec53c61516afedfcfd6: Status 404 returned error can't find the container with id 8c2c929b825b8c0b7372b6797ac977dd0218a672d0b6cec53c61516afedfcfd6 Feb 19 15:32:22 crc kubenswrapper[4810]: I0219 15:32:22.283015 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 15:32:22 crc kubenswrapper[4810]: I0219 15:32:22.304311 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6397af05-d030-46c2-8a0f-a90beb9b2502","Type":"ContainerStarted","Data":"8c2c929b825b8c0b7372b6797ac977dd0218a672d0b6cec53c61516afedfcfd6"} Feb 19 15:32:23 crc kubenswrapper[4810]: I0219 15:32:23.317194 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6397af05-d030-46c2-8a0f-a90beb9b2502","Type":"ContainerStarted","Data":"d126d183ccc2619d372f4dc3e6c72b3581ce487d207925dee34a82886e30e28f"} Feb 19 15:32:23 crc kubenswrapper[4810]: I0219 15:32:23.317740 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6397af05-d030-46c2-8a0f-a90beb9b2502","Type":"ContainerStarted","Data":"c90a5d8710584e4feb60b33d824e1b128654f406aa3390d69408f6d9fa4cb765"} Feb 19 15:32:23 crc kubenswrapper[4810]: I0219 15:32:23.353245 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.353218548 podStartE2EDuration="2.353218548s" podCreationTimestamp="2026-02-19 15:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:32:23.345255015 +0000 UTC m=+1372.827285179" watchObservedRunningTime="2026-02-19 15:32:23.353218548 +0000 UTC m=+1372.835248702" Feb 19 15:32:24 crc kubenswrapper[4810]: I0219 15:32:24.645483 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:32:24 crc kubenswrapper[4810]: I0219 15:32:24.645836 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 15:32:24 crc kubenswrapper[4810]: I0219 15:32:24.688894 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 15:32:29 crc kubenswrapper[4810]: I0219 15:32:29.645565 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:32:29 crc kubenswrapper[4810]: I0219 15:32:29.646282 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 15:32:29 crc kubenswrapper[4810]: I0219 15:32:29.689068 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 15:32:29 crc kubenswrapper[4810]: I0219 15:32:29.721796 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 15:32:30 crc kubenswrapper[4810]: I0219 15:32:30.455040 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 15:32:30 crc kubenswrapper[4810]: I0219 15:32:30.658488 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f36ad344-e946-4221-892d-3ffe8fbdd59b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:30 crc kubenswrapper[4810]: I0219 15:32:30.658506 4810 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f36ad344-e946-4221-892d-3ffe8fbdd59b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:31 crc kubenswrapper[4810]: I0219 15:32:31.722547 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:32:31 crc kubenswrapper[4810]: I0219 15:32:31.722938 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 15:32:32 crc kubenswrapper[4810]: I0219 15:32:32.736591 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6397af05-d030-46c2-8a0f-a90beb9b2502" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:32 crc kubenswrapper[4810]: I0219 15:32:32.736565 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6397af05-d030-46c2-8a0f-a90beb9b2502" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 15:32:36 crc kubenswrapper[4810]: I0219 15:32:36.466641 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.075706 4810 scope.go:117] "RemoveContainer" containerID="1d5f88ecf4c81e410f0df90ab60b4889433c0df9a3a9c3d46a0ad0dad5a5c6f9" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.111530 4810 scope.go:117] "RemoveContainer" containerID="5cab9d89ebbb4715343c797d52130d009456c8d3d9eaf887c80836933b581c07" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.144657 4810 scope.go:117] "RemoveContainer" 
containerID="a21841d08c3392c62d7d0123a3228f5e73d4d173558fa34eefb59a0035628f39" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.651380 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.655442 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 15:32:39 crc kubenswrapper[4810]: I0219 15:32:39.658006 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:32:40 crc kubenswrapper[4810]: I0219 15:32:40.546675 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 15:32:41 crc kubenswrapper[4810]: I0219 15:32:41.735222 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:32:41 crc kubenswrapper[4810]: I0219 15:32:41.736600 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:32:41 crc kubenswrapper[4810]: I0219 15:32:41.743660 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 15:32:41 crc kubenswrapper[4810]: I0219 15:32:41.753643 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:32:42 crc kubenswrapper[4810]: I0219 15:32:42.566412 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 15:32:42 crc kubenswrapper[4810]: I0219 15:32:42.579103 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 15:32:49 crc kubenswrapper[4810]: I0219 15:32:49.538173 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:32:49 crc kubenswrapper[4810]: I0219 15:32:49.538911 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:32:49 crc kubenswrapper[4810]: I0219 15:32:49.538975 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:32:49 crc kubenswrapper[4810]: I0219 15:32:49.539950 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:32:49 crc kubenswrapper[4810]: I0219 15:32:49.540047 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075" gracePeriod=600 Feb 19 15:32:50 crc kubenswrapper[4810]: I0219 15:32:50.660373 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075" exitCode=0 Feb 19 15:32:50 crc kubenswrapper[4810]: I0219 15:32:50.660476 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075"} Feb 19 15:32:50 crc kubenswrapper[4810]: I0219 15:32:50.660950 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a"} Feb 19 15:32:50 crc kubenswrapper[4810]: I0219 15:32:50.660973 4810 scope.go:117] "RemoveContainer" containerID="37fe95e370faa9fca4a69499713730a8ba7e7939f57cd237ea9a505f9b09a6bf" Feb 19 15:32:51 crc kubenswrapper[4810]: I0219 15:32:51.229657 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.354514 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.591402 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.593311 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.616252 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.649971 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.650024 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.650115 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjnp\" (UniqueName: \"kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.751570 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.751611 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.751643 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjnp\" (UniqueName: \"kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.752056 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.752121 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.780885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjnp\" (UniqueName: \"kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp\") pod \"redhat-operators-955rr\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:52 crc kubenswrapper[4810]: I0219 15:32:52.921785 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:32:53 crc kubenswrapper[4810]: W0219 15:32:53.324734 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod783f4f81_b0fb_49c4_9b07_a2715641355a.slice/crio-75d6325f144c755c14a8b6930d4d42def83ced3289d5cc9615e327af0d069534 WatchSource:0}: Error finding container 75d6325f144c755c14a8b6930d4d42def83ced3289d5cc9615e327af0d069534: Status 404 returned error can't find the container with id 75d6325f144c755c14a8b6930d4d42def83ced3289d5cc9615e327af0d069534 Feb 19 15:32:53 crc kubenswrapper[4810]: I0219 15:32:53.333948 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:32:53 crc kubenswrapper[4810]: I0219 15:32:53.700875 4810 generic.go:334] "Generic (PLEG): container finished" podID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerID="761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d" exitCode=0 Feb 19 15:32:53 crc kubenswrapper[4810]: I0219 15:32:53.700931 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerDied","Data":"761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d"} Feb 19 15:32:53 crc kubenswrapper[4810]: I0219 15:32:53.700956 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerStarted","Data":"75d6325f144c755c14a8b6930d4d42def83ced3289d5cc9615e327af0d069534"} Feb 19 15:32:54 crc kubenswrapper[4810]: I0219 15:32:54.920230 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" 
containerID="cri-o://c5b200cb5f29f71ee660d52a8a35bdae1c6f7de01b68b55aa0a763c8a2cc371f" gracePeriod=604797 Feb 19 15:32:55 crc kubenswrapper[4810]: I0219 15:32:55.743309 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerStarted","Data":"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc"} Feb 19 15:32:56 crc kubenswrapper[4810]: I0219 15:32:56.031544 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" containerID="cri-o://dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199" gracePeriod=604797 Feb 19 15:32:56 crc kubenswrapper[4810]: I0219 15:32:56.132894 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 19 15:32:56 crc kubenswrapper[4810]: I0219 15:32:56.443011 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 19 15:32:57 crc kubenswrapper[4810]: I0219 15:32:57.766684 4810 generic.go:334] "Generic (PLEG): container finished" podID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerID="6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc" exitCode=0 Feb 19 15:32:57 crc kubenswrapper[4810]: I0219 15:32:57.766753 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerDied","Data":"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc"} Feb 19 
15:32:57 crc kubenswrapper[4810]: I0219 15:32:57.768540 4810 generic.go:334] "Generic (PLEG): container finished" podID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerID="c5b200cb5f29f71ee660d52a8a35bdae1c6f7de01b68b55aa0a763c8a2cc371f" exitCode=0 Feb 19 15:32:57 crc kubenswrapper[4810]: I0219 15:32:57.768572 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerDied","Data":"c5b200cb5f29f71ee660d52a8a35bdae1c6f7de01b68b55aa0a763c8a2cc371f"} Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.700190 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.778586 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.778632 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.778692 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.778724 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.778836 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779517 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779567 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779649 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779682 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779762 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.779836 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie\") pod \"00bcfb03-4357-4343-99a5-30dc7f25abe9\" (UID: \"00bcfb03-4357-4343-99a5-30dc7f25abe9\") " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.782250 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.783833 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.784168 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.784201 4810 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.787380 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.790309 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.794278 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info" (OuterVolumeSpecName: "pod-info") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.794796 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerStarted","Data":"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f"} Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.814585 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.826086 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.826464 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"00bcfb03-4357-4343-99a5-30dc7f25abe9","Type":"ContainerDied","Data":"438e5fcabfeda7b104ccc004754827e00367d2ec7bbb19edfefdf5cb049ee1ce"} Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.826567 4810 scope.go:117] "RemoveContainer" containerID="c5b200cb5f29f71ee660d52a8a35bdae1c6f7de01b68b55aa0a763c8a2cc371f" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.826776 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.827598 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-955rr" podStartSLOduration=2.179152827 podStartE2EDuration="6.827583885s" podCreationTimestamp="2026-02-19 15:32:52 +0000 UTC" firstStartedPulling="2026-02-19 15:32:53.703532895 +0000 UTC m=+1403.185563019" lastFinishedPulling="2026-02-19 15:32:58.351963953 +0000 UTC m=+1407.833994077" observedRunningTime="2026-02-19 15:32:58.827157084 +0000 UTC m=+1408.309187218" watchObservedRunningTime="2026-02-19 15:32:58.827583885 +0000 UTC m=+1408.309614009" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.843741 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8" (OuterVolumeSpecName: "kube-api-access-74vq8") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "kube-api-access-74vq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.867765 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data" (OuterVolumeSpecName: "config-data") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886586 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886626 4810 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/00bcfb03-4357-4343-99a5-30dc7f25abe9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886639 4810 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/00bcfb03-4357-4343-99a5-30dc7f25abe9-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886651 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886676 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886689 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-kube-api-access-74vq8\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.886701 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.906900 
4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf" (OuterVolumeSpecName: "server-conf") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.933075 4810 scope.go:117] "RemoveContainer" containerID="5f65c0deba7b3077c5501137f00e319288d66ec1245a0e431539e6d1d5d3d67c" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.945218 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.997839 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:58 crc kubenswrapper[4810]: I0219 15:32:58.997963 4810 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/00bcfb03-4357-4343-99a5-30dc7f25abe9-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.010514 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "00bcfb03-4357-4343-99a5-30dc7f25abe9" (UID: "00bcfb03-4357-4343-99a5-30dc7f25abe9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.103858 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/00bcfb03-4357-4343-99a5-30dc7f25abe9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.168563 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.178561 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.204184 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.204848 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="setup-container" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.204866 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="setup-container" Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.204893 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.204900 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.205130 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.206275 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.212688 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.212948 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.213104 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9r5f7" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.213125 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.213135 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.213193 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.215677 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.240866 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307078 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307126 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b86448c3-669a-4132-b8ab-4db06347fa10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307151 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307193 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307220 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307235 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ct7\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-kube-api-access-j7ct7\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307286 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307334 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b86448c3-669a-4132-b8ab-4db06347fa10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307376 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307390 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-config-data\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.307414 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.348362 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409277 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409454 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409501 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtmb8\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409569 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409618 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409722 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409746 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409817 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409842 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.409900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf\") pod \"2a3676ed-f06f-4dea-82a1-959716331113\" (UID: \"2a3676ed-f06f-4dea-82a1-959716331113\") " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 
15:32:59.410001 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410206 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b86448c3-669a-4132-b8ab-4db06347fa10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410244 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410301 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410364 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410392 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ct7\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-kube-api-access-j7ct7\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410453 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b86448c3-669a-4132-b8ab-4db06347fa10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410575 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410559 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410609 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410876 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.410890 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.411701 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.413231 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.414317 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-config-data\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.415488 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.416105 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.416397 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.417009 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.420347 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.420715 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info" (OuterVolumeSpecName: "pod-info") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.421747 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.424868 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8" (OuterVolumeSpecName: "kube-api-access-wtmb8") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "kube-api-access-wtmb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.425649 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b86448c3-669a-4132-b8ab-4db06347fa10-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.426031 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b86448c3-669a-4132-b8ab-4db06347fa10-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.429994 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b86448c3-669a-4132-b8ab-4db06347fa10-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.435874 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.450542 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.453295 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bcfb03-4357-4343-99a5-30dc7f25abe9" path="/var/lib/kubelet/pods/00bcfb03-4357-4343-99a5-30dc7f25abe9/volumes" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.470800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ct7\" (UniqueName: \"kubernetes.io/projected/b86448c3-669a-4132-b8ab-4db06347fa10-kube-api-access-j7ct7\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.512433 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"b86448c3-669a-4132-b8ab-4db06347fa10\") " pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.516806 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data" (OuterVolumeSpecName: "config-data") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.517574 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtmb8\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-kube-api-access-wtmb8\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518211 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518236 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518247 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518258 4810 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a3676ed-f06f-4dea-82a1-959716331113-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518266 4810 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a3676ed-f06f-4dea-82a1-959716331113-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.518275 4810 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.542015 4810 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf" (OuterVolumeSpecName: "server-conf") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.542693 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.551017 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.617291 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2a3676ed-f06f-4dea-82a1-959716331113" (UID: "2a3676ed-f06f-4dea-82a1-959716331113"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.620155 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a3676ed-f06f-4dea-82a1-959716331113-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.620254 4810 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a3676ed-f06f-4dea-82a1-959716331113-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.620308 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.839445 4810 generic.go:334] "Generic (PLEG): container finished" podID="2a3676ed-f06f-4dea-82a1-959716331113" containerID="dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199" exitCode=0 Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.839507 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.839529 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerDied","Data":"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199"} Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.839896 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a3676ed-f06f-4dea-82a1-959716331113","Type":"ContainerDied","Data":"65b3bafec6c943249f491880afc4a9c1d426f050515e89a7aeb5d3ee771259c3"} Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.839913 4810 scope.go:117] "RemoveContainer" containerID="dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.881406 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.883564 4810 scope.go:117] "RemoveContainer" containerID="d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.919613 4810 scope.go:117] "RemoveContainer" containerID="dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199" Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.922957 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199\": container with ID starting with dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199 not found: ID does not exist" containerID="dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.923090 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199"} err="failed to get container status \"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199\": rpc error: code = NotFound desc = could not find container \"dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199\": container with ID starting with dbc27193c644946d801635eac3f4ac8fd7e93e4ac9ea0ee7cb027f5cec7cc199 not found: ID does not exist" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.923226 4810 scope.go:117] "RemoveContainer" containerID="d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.923405 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.923932 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726\": container with ID starting with d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726 not found: ID does not exist" containerID="d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.924139 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726"} err="failed to get container status \"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726\": rpc error: code = NotFound desc = could not find container \"d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726\": container with ID starting with d8e4585cb9bc557ddc8d8dc697ba266361d794bec388e11f6081c9ec077a1726 not found: ID does not exist" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.953160 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.964464 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="setup-container" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.964495 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="setup-container" Feb 19 15:32:59 crc kubenswrapper[4810]: E0219 15:32:59.964512 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.964518 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.964780 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3676ed-f06f-4dea-82a1-959716331113" containerName="rabbitmq" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.965851 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.967858 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.968029 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.968183 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.969073 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.969252 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.969376 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.969608 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxvfg" Feb 19 15:32:59 crc kubenswrapper[4810]: I0219 15:32:59.990128 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.051776 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052292 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052396 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052446 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052476 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052504 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052534 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58z2t\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-kube-api-access-58z2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052555 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03247cdb-4055-4d47-b433-848e363768ab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052577 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052615 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052635 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03247cdb-4055-4d47-b433-848e363768ab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.052666 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc 
kubenswrapper[4810]: W0219 15:33:00.053792 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86448c3_669a_4132_b8ab_4db06347fa10.slice/crio-399aaf0177600c18b47a56dbdca5d87c12cbf5e9ea295a08327d01e27b1b0566 WatchSource:0}: Error finding container 399aaf0177600c18b47a56dbdca5d87c12cbf5e9ea295a08327d01e27b1b0566: Status 404 returned error can't find the container with id 399aaf0177600c18b47a56dbdca5d87c12cbf5e9ea295a08327d01e27b1b0566 Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.154706 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155079 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03247cdb-4055-4d47-b433-848e363768ab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155240 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155276 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155338 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155375 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155409 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155444 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58z2t\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-kube-api-access-58z2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 
15:33:00.155466 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03247cdb-4055-4d47-b433-848e363768ab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.155490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.156052 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.156135 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.156053 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.156747 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.157224 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.158772 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.158824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03247cdb-4055-4d47-b433-848e363768ab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.159261 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03247cdb-4055-4d47-b433-848e363768ab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.160074 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.160729 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03247cdb-4055-4d47-b433-848e363768ab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.176186 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58z2t\" (UniqueName: \"kubernetes.io/projected/03247cdb-4055-4d47-b433-848e363768ab-kube-api-access-58z2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.184404 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03247cdb-4055-4d47-b433-848e363768ab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.286608 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.808532 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 15:33:00 crc kubenswrapper[4810]: I0219 15:33:00.875075 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b86448c3-669a-4132-b8ab-4db06347fa10","Type":"ContainerStarted","Data":"399aaf0177600c18b47a56dbdca5d87c12cbf5e9ea295a08327d01e27b1b0566"} Feb 19 15:33:01 crc kubenswrapper[4810]: I0219 15:33:01.449833 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3676ed-f06f-4dea-82a1-959716331113" path="/var/lib/kubelet/pods/2a3676ed-f06f-4dea-82a1-959716331113/volumes" Feb 19 15:33:01 crc kubenswrapper[4810]: I0219 15:33:01.888210 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03247cdb-4055-4d47-b433-848e363768ab","Type":"ContainerStarted","Data":"185b4c5740d4df6690def4329f9bd23e57daa2dbba3c22414e45dd1a66711b20"} Feb 19 15:33:01 crc kubenswrapper[4810]: I0219 15:33:01.890109 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b86448c3-669a-4132-b8ab-4db06347fa10","Type":"ContainerStarted","Data":"01f710572014ea30c64f1b2a62a873ecb5702426460d2b063c7df96d0df38fa3"} Feb 19 15:33:02 crc kubenswrapper[4810]: I0219 15:33:02.902592 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03247cdb-4055-4d47-b433-848e363768ab","Type":"ContainerStarted","Data":"aeb8c5c6a71d86ecc6aba3fc315503cd296a83a4195aac56a6884ee1e7fef305"} Feb 19 15:33:02 crc kubenswrapper[4810]: I0219 15:33:02.921913 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:02 crc kubenswrapper[4810]: I0219 15:33:02.921996 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:04 crc kubenswrapper[4810]: I0219 15:33:04.000749 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-955rr" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="registry-server" probeResult="failure" output=< Feb 19 15:33:04 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:33:04 crc kubenswrapper[4810]: > Feb 19 15:33:07 crc kubenswrapper[4810]: I0219 15:33:07.991620 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:07 crc kubenswrapper[4810]: I0219 15:33:07.994667 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:07 crc kubenswrapper[4810]: I0219 15:33:07.998776 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.009590 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.010797 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.010906 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " 
pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.011090 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmt7\" (UniqueName: \"kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.011284 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.011443 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.011617 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.011729 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: 
\"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113148 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113229 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113300 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmt7\" (UniqueName: \"kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113367 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113413 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: 
\"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.113520 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114270 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114275 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114438 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" 
Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114559 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114579 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.114639 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.135681 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmt7\" (UniqueName: \"kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7\") pod \"dnsmasq-dns-7754685579-pj5bz\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.321444 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.802388 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:08 crc kubenswrapper[4810]: I0219 15:33:08.967791 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754685579-pj5bz" event={"ID":"0c3acd8a-696a-4a86-9052-03ef2cca79c7","Type":"ContainerStarted","Data":"e6484f856f1dabe83fb7c154c7f3ffdc7af18bb3f81e4c632668b443bfbf7736"} Feb 19 15:33:09 crc kubenswrapper[4810]: I0219 15:33:09.981349 4810 generic.go:334] "Generic (PLEG): container finished" podID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerID="493902d9834ff016c9ae111e5d259b86d62b85d4780006bae12de68a9a933a23" exitCode=0 Feb 19 15:33:09 crc kubenswrapper[4810]: I0219 15:33:09.981402 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754685579-pj5bz" event={"ID":"0c3acd8a-696a-4a86-9052-03ef2cca79c7","Type":"ContainerDied","Data":"493902d9834ff016c9ae111e5d259b86d62b85d4780006bae12de68a9a933a23"} Feb 19 15:33:10 crc kubenswrapper[4810]: I0219 15:33:10.995913 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754685579-pj5bz" event={"ID":"0c3acd8a-696a-4a86-9052-03ef2cca79c7","Type":"ContainerStarted","Data":"7c6881386ea9e17edcb266f7d41c368100ad0a747bba3ae5f2c27278cd9f5864"} Feb 19 15:33:10 crc kubenswrapper[4810]: I0219 15:33:10.996177 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:11 crc kubenswrapper[4810]: I0219 15:33:11.038403 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7754685579-pj5bz" podStartSLOduration=4.038382157 podStartE2EDuration="4.038382157s" podCreationTimestamp="2026-02-19 15:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:33:11.024234528 +0000 UTC m=+1420.506264682" watchObservedRunningTime="2026-02-19 15:33:11.038382157 +0000 UTC m=+1420.520412291" Feb 19 15:33:13 crc kubenswrapper[4810]: I0219 15:33:13.005790 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:13 crc kubenswrapper[4810]: I0219 15:33:13.055397 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:13 crc kubenswrapper[4810]: I0219 15:33:13.255790 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.033512 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-955rr" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="registry-server" containerID="cri-o://1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f" gracePeriod=2 Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.562976 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.756076 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities\") pod \"783f4f81-b0fb-49c4-9b07-a2715641355a\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.756176 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrjnp\" (UniqueName: \"kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp\") pod \"783f4f81-b0fb-49c4-9b07-a2715641355a\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.756591 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content\") pod \"783f4f81-b0fb-49c4-9b07-a2715641355a\" (UID: \"783f4f81-b0fb-49c4-9b07-a2715641355a\") " Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.757403 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities" (OuterVolumeSpecName: "utilities") pod "783f4f81-b0fb-49c4-9b07-a2715641355a" (UID: "783f4f81-b0fb-49c4-9b07-a2715641355a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.769121 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp" (OuterVolumeSpecName: "kube-api-access-hrjnp") pod "783f4f81-b0fb-49c4-9b07-a2715641355a" (UID: "783f4f81-b0fb-49c4-9b07-a2715641355a"). InnerVolumeSpecName "kube-api-access-hrjnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.859450 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.859786 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrjnp\" (UniqueName: \"kubernetes.io/projected/783f4f81-b0fb-49c4-9b07-a2715641355a-kube-api-access-hrjnp\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.880098 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "783f4f81-b0fb-49c4-9b07-a2715641355a" (UID: "783f4f81-b0fb-49c4-9b07-a2715641355a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:33:14 crc kubenswrapper[4810]: I0219 15:33:14.961116 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783f4f81-b0fb-49c4-9b07-a2715641355a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.056595 4810 generic.go:334] "Generic (PLEG): container finished" podID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerID="1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f" exitCode=0 Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.056664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerDied","Data":"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f"} Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.056723 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-955rr" event={"ID":"783f4f81-b0fb-49c4-9b07-a2715641355a","Type":"ContainerDied","Data":"75d6325f144c755c14a8b6930d4d42def83ced3289d5cc9615e327af0d069534"} Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.056741 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-955rr" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.056996 4810 scope.go:117] "RemoveContainer" containerID="1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.104149 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.106917 4810 scope.go:117] "RemoveContainer" containerID="6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.123943 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-955rr"] Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.136203 4810 scope.go:117] "RemoveContainer" containerID="761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.187969 4810 scope.go:117] "RemoveContainer" containerID="1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f" Feb 19 15:33:15 crc kubenswrapper[4810]: E0219 15:33:15.188466 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f\": container with ID starting with 1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f not found: ID does not exist" containerID="1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.188525 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f"} err="failed to get container status \"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f\": rpc error: code = NotFound desc = could not find container \"1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f\": container with ID starting with 1a4461610ed2fb313a8e62f0315dc40b8ce3a3e6a0f3690df259b0b16d63375f not found: ID does not exist" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.188563 4810 scope.go:117] "RemoveContainer" containerID="6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc" Feb 19 15:33:15 crc kubenswrapper[4810]: E0219 15:33:15.188976 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc\": container with ID starting with 6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc not found: ID does not exist" containerID="6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.189023 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc"} err="failed to get container status \"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc\": rpc error: code = NotFound desc = could not find container \"6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc\": container with ID starting with 6bc2e1d7197343d97aad8e795e58bf89ffecf5750c4972eb72fee6305a410fbc not found: ID does not exist" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.189050 4810 scope.go:117] "RemoveContainer" containerID="761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d" Feb 19 15:33:15 crc kubenswrapper[4810]: E0219 
15:33:15.189373 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d\": container with ID starting with 761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d not found: ID does not exist" containerID="761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.189435 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d"} err="failed to get container status \"761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d\": rpc error: code = NotFound desc = could not find container \"761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d\": container with ID starting with 761f02a235bc3255020cf4a80c96bbf0e5db299cb80ff40640ab5d09947f186d not found: ID does not exist" Feb 19 15:33:15 crc kubenswrapper[4810]: I0219 15:33:15.454311 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" path="/var/lib/kubelet/pods/783f4f81-b0fb-49c4-9b07-a2715641355a/volumes" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.323651 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.414725 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.415039 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="dnsmasq-dns" containerID="cri-o://ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc" gracePeriod=10 Feb 19 15:33:18 crc 
kubenswrapper[4810]: I0219 15:33:18.591276 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685d6df875-6hghq"] Feb 19 15:33:18 crc kubenswrapper[4810]: E0219 15:33:18.591843 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="registry-server" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.591865 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="registry-server" Feb 19 15:33:18 crc kubenswrapper[4810]: E0219 15:33:18.591886 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="extract-utilities" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.591895 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="extract-utilities" Feb 19 15:33:18 crc kubenswrapper[4810]: E0219 15:33:18.591918 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="extract-content" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.591925 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="extract-content" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.592173 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="783f4f81-b0fb-49c4-9b07-a2715641355a" containerName="registry-server" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.593694 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.601067 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685d6df875-6hghq"] Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.749391 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxvs\" (UniqueName: \"kubernetes.io/projected/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-kube-api-access-7gxvs\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.749453 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-swift-storage-0\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.749961 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-svc\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.750008 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-config\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.750110 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-openstack-edpm-ipam\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.750161 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-sb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.750235 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-nb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.853542 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-nb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.853930 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxvs\" (UniqueName: \"kubernetes.io/projected/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-kube-api-access-7gxvs\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.853958 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-swift-storage-0\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.853991 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-svc\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.854020 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-config\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.854087 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-openstack-edpm-ipam\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.854121 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-sb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.854893 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-swift-storage-0\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.855394 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-nb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.856166 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-config\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.856597 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-dns-svc\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.856664 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-openstack-edpm-ipam\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.858816 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-ovsdbserver-sb\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.885384 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gxvs\" (UniqueName: \"kubernetes.io/projected/7c074feb-2f7c-4f84-9ea8-5a9062e6b10a-kube-api-access-7gxvs\") pod \"dnsmasq-dns-685d6df875-6hghq\" (UID: \"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a\") " pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:18 crc kubenswrapper[4810]: I0219 15:33:18.915768 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.085155 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.137311 4810 generic.go:334] "Generic (PLEG): container finished" podID="275a98c0-8e6a-4587-8628-54f70b836615" containerID="ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc" exitCode=0 Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.137573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" event={"ID":"275a98c0-8e6a-4587-8628-54f70b836615","Type":"ContainerDied","Data":"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc"} Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.137598 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" event={"ID":"275a98c0-8e6a-4587-8628-54f70b836615","Type":"ContainerDied","Data":"8c49db3d1d18d97ef799aa2fe03c13f1f635369a4b0477795ad010b6baa6d941"} Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.137616 4810 scope.go:117] "RemoveContainer" 
containerID="ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.137719 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc9fb8849-t2gx5" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.161620 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.161720 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.161883 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9xn9\" (UniqueName: \"kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.161913 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.162038 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: 
\"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.162077 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb\") pod \"275a98c0-8e6a-4587-8628-54f70b836615\" (UID: \"275a98c0-8e6a-4587-8628-54f70b836615\") " Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.168266 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9" (OuterVolumeSpecName: "kube-api-access-t9xn9") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "kube-api-access-t9xn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.188342 4810 scope.go:117] "RemoveContainer" containerID="87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.221000 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config" (OuterVolumeSpecName: "config") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.222092 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.239633 4810 scope.go:117] "RemoveContainer" containerID="ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc" Feb 19 15:33:19 crc kubenswrapper[4810]: E0219 15:33:19.240047 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc\": container with ID starting with ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc not found: ID does not exist" containerID="ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.240090 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc"} err="failed to get container status \"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc\": rpc error: code = NotFound desc = could not find container \"ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc\": container with ID starting with ec7d0dde11f4d91dfc1709e9d373983bc5e0931480a8ae6b522e10ab70a1f7fc not found: ID does not exist" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.240115 4810 scope.go:117] "RemoveContainer" containerID="87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f" Feb 19 15:33:19 crc kubenswrapper[4810]: E0219 15:33:19.240644 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f\": container with ID starting with 87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f not found: ID does not exist" containerID="87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.240682 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f"} err="failed to get container status \"87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f\": rpc error: code = NotFound desc = could not find container \"87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f\": container with ID starting with 87b0b7637ffa6ceedf744033bed56aa64044ea8ed1fd841f1129813c92a9043f not found: ID does not exist" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.243976 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.245438 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.247209 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "275a98c0-8e6a-4587-8628-54f70b836615" (UID: "275a98c0-8e6a-4587-8628-54f70b836615"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265419 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9xn9\" (UniqueName: \"kubernetes.io/projected/275a98c0-8e6a-4587-8628-54f70b836615-kube-api-access-t9xn9\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265466 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265477 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265485 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265494 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.265503 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/275a98c0-8e6a-4587-8628-54f70b836615-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.482237 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.496016 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dc9fb8849-t2gx5"] Feb 19 
15:33:19 crc kubenswrapper[4810]: W0219 15:33:19.522888 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c074feb_2f7c_4f84_9ea8_5a9062e6b10a.slice/crio-6c77a5fd184196ecb07c7282f62afdb7d7e64c0c1fbb7ec04e0521259c9ec772 WatchSource:0}: Error finding container 6c77a5fd184196ecb07c7282f62afdb7d7e64c0c1fbb7ec04e0521259c9ec772: Status 404 returned error can't find the container with id 6c77a5fd184196ecb07c7282f62afdb7d7e64c0c1fbb7ec04e0521259c9ec772 Feb 19 15:33:19 crc kubenswrapper[4810]: I0219 15:33:19.525980 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685d6df875-6hghq"] Feb 19 15:33:20 crc kubenswrapper[4810]: I0219 15:33:20.151767 4810 generic.go:334] "Generic (PLEG): container finished" podID="7c074feb-2f7c-4f84-9ea8-5a9062e6b10a" containerID="a696b0db87b83e84ced11bfb6a883121309718434a9f476eae5a74820db73b13" exitCode=0 Feb 19 15:33:20 crc kubenswrapper[4810]: I0219 15:33:20.151849 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685d6df875-6hghq" event={"ID":"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a","Type":"ContainerDied","Data":"a696b0db87b83e84ced11bfb6a883121309718434a9f476eae5a74820db73b13"} Feb 19 15:33:20 crc kubenswrapper[4810]: I0219 15:33:20.151899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685d6df875-6hghq" event={"ID":"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a","Type":"ContainerStarted","Data":"6c77a5fd184196ecb07c7282f62afdb7d7e64c0c1fbb7ec04e0521259c9ec772"} Feb 19 15:33:21 crc kubenswrapper[4810]: I0219 15:33:21.170304 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685d6df875-6hghq" event={"ID":"7c074feb-2f7c-4f84-9ea8-5a9062e6b10a","Type":"ContainerStarted","Data":"00c970a164c833e0d9c62aa4f69220908502eb8fc2336adb5ca376b8c779d390"} Feb 19 15:33:21 crc kubenswrapper[4810]: I0219 15:33:21.170823 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:21 crc kubenswrapper[4810]: I0219 15:33:21.225251 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685d6df875-6hghq" podStartSLOduration=3.225229719 podStartE2EDuration="3.225229719s" podCreationTimestamp="2026-02-19 15:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:33:21.204912756 +0000 UTC m=+1430.686942900" watchObservedRunningTime="2026-02-19 15:33:21.225229719 +0000 UTC m=+1430.707259853" Feb 19 15:33:21 crc kubenswrapper[4810]: I0219 15:33:21.454874 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275a98c0-8e6a-4587-8628-54f70b836615" path="/var/lib/kubelet/pods/275a98c0-8e6a-4587-8628-54f70b836615/volumes" Feb 19 15:33:28 crc kubenswrapper[4810]: I0219 15:33:28.918587 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685d6df875-6hghq" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.006059 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.006559 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7754685579-pj5bz" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="dnsmasq-dns" containerID="cri-o://7c6881386ea9e17edcb266f7d41c368100ad0a747bba3ae5f2c27278cd9f5864" gracePeriod=10 Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.283032 4810 generic.go:334] "Generic (PLEG): container finished" podID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerID="7c6881386ea9e17edcb266f7d41c368100ad0a747bba3ae5f2c27278cd9f5864" exitCode=0 Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.283303 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7754685579-pj5bz" event={"ID":"0c3acd8a-696a-4a86-9052-03ef2cca79c7","Type":"ContainerDied","Data":"7c6881386ea9e17edcb266f7d41c368100ad0a747bba3ae5f2c27278cd9f5864"} Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.563592 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695198 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695305 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695382 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695448 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695474 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695510 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgmt7\" (UniqueName: \"kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.695587 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0\") pod \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\" (UID: \"0c3acd8a-696a-4a86-9052-03ef2cca79c7\") " Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.707002 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7" (OuterVolumeSpecName: "kube-api-access-mgmt7") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "kube-api-access-mgmt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.751187 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.757531 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.761446 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.762301 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config" (OuterVolumeSpecName: "config") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.774199 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.783802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c3acd8a-696a-4a86-9052-03ef2cca79c7" (UID: "0c3acd8a-696a-4a86-9052-03ef2cca79c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797784 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797827 4810 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797841 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797855 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797866 4810 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797879 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgmt7\" (UniqueName: 
\"kubernetes.io/projected/0c3acd8a-696a-4a86-9052-03ef2cca79c7-kube-api-access-mgmt7\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:29 crc kubenswrapper[4810]: I0219 15:33:29.797890 4810 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c3acd8a-696a-4a86-9052-03ef2cca79c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.295299 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754685579-pj5bz" event={"ID":"0c3acd8a-696a-4a86-9052-03ef2cca79c7","Type":"ContainerDied","Data":"e6484f856f1dabe83fb7c154c7f3ffdc7af18bb3f81e4c632668b443bfbf7736"} Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.295375 4810 scope.go:117] "RemoveContainer" containerID="7c6881386ea9e17edcb266f7d41c368100ad0a747bba3ae5f2c27278cd9f5864" Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.295442 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7754685579-pj5bz" Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.404380 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.432611 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7754685579-pj5bz"] Feb 19 15:33:30 crc kubenswrapper[4810]: I0219 15:33:30.489527 4810 scope.go:117] "RemoveContainer" containerID="493902d9834ff016c9ae111e5d259b86d62b85d4780006bae12de68a9a933a23" Feb 19 15:33:31 crc kubenswrapper[4810]: I0219 15:33:31.464677 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" path="/var/lib/kubelet/pods/0c3acd8a-696a-4a86-9052-03ef2cca79c7/volumes" Feb 19 15:33:34 crc kubenswrapper[4810]: I0219 15:33:34.343758 4810 generic.go:334] "Generic (PLEG): container finished" podID="b86448c3-669a-4132-b8ab-4db06347fa10" containerID="01f710572014ea30c64f1b2a62a873ecb5702426460d2b063c7df96d0df38fa3" exitCode=0 Feb 19 15:33:34 crc kubenswrapper[4810]: I0219 15:33:34.343961 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b86448c3-669a-4132-b8ab-4db06347fa10","Type":"ContainerDied","Data":"01f710572014ea30c64f1b2a62a873ecb5702426460d2b063c7df96d0df38fa3"} Feb 19 15:33:35 crc kubenswrapper[4810]: I0219 15:33:35.378301 4810 generic.go:334] "Generic (PLEG): container finished" podID="03247cdb-4055-4d47-b433-848e363768ab" containerID="aeb8c5c6a71d86ecc6aba3fc315503cd296a83a4195aac56a6884ee1e7fef305" exitCode=0 Feb 19 15:33:35 crc kubenswrapper[4810]: I0219 15:33:35.378369 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03247cdb-4055-4d47-b433-848e363768ab","Type":"ContainerDied","Data":"aeb8c5c6a71d86ecc6aba3fc315503cd296a83a4195aac56a6884ee1e7fef305"} Feb 19 15:33:35 crc kubenswrapper[4810]: I0219 
15:33:35.405807 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b86448c3-669a-4132-b8ab-4db06347fa10","Type":"ContainerStarted","Data":"dd506e1ca426d2c85d2a66a3c84ad0452ba42d9045328da00c06a257b5a9e728"} Feb 19 15:33:35 crc kubenswrapper[4810]: I0219 15:33:35.406783 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 15:33:35 crc kubenswrapper[4810]: I0219 15:33:35.487365 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.487344181 podStartE2EDuration="36.487344181s" podCreationTimestamp="2026-02-19 15:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:33:35.477738138 +0000 UTC m=+1444.959768262" watchObservedRunningTime="2026-02-19 15:33:35.487344181 +0000 UTC m=+1444.969374305" Feb 19 15:33:36 crc kubenswrapper[4810]: I0219 15:33:36.417952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03247cdb-4055-4d47-b433-848e363768ab","Type":"ContainerStarted","Data":"8748ae10090a4ae5b6bfbf65ab86cae3b394f0c14ed3d986cfc40122c35f530a"} Feb 19 15:33:36 crc kubenswrapper[4810]: I0219 15:33:36.418634 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:36 crc kubenswrapper[4810]: I0219 15:33:36.447908 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.447889262 podStartE2EDuration="37.447889262s" podCreationTimestamp="2026-02-19 15:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:33:36.443259265 +0000 UTC m=+1445.925289389" watchObservedRunningTime="2026-02-19 
15:33:36.447889262 +0000 UTC m=+1445.929919386" Feb 19 15:33:39 crc kubenswrapper[4810]: I0219 15:33:39.331642 4810 scope.go:117] "RemoveContainer" containerID="1d10f5e352636a23ee8d873911fea2dc7821da75356614e2daef6d4813ea231e" Feb 19 15:33:39 crc kubenswrapper[4810]: I0219 15:33:39.362607 4810 scope.go:117] "RemoveContainer" containerID="bcd810514d656b151586b085915c58b159ffd3f83f716b74f5c945866a1aa802" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.744249 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb"] Feb 19 15:33:42 crc kubenswrapper[4810]: E0219 15:33:42.745771 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.745794 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: E0219 15:33:42.745807 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="init" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.745814 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="init" Feb 19 15:33:42 crc kubenswrapper[4810]: E0219 15:33:42.745829 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="init" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.745837 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="init" Feb 19 15:33:42 crc kubenswrapper[4810]: E0219 15:33:42.745878 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.745884 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.746107 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="275a98c0-8e6a-4587-8628-54f70b836615" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.746133 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3acd8a-696a-4a86-9052-03ef2cca79c7" containerName="dnsmasq-dns" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.747146 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.749925 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.750150 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.753208 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.753295 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.771509 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb"] Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.797947 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: 
\"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.798199 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.798256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8t5\" (UniqueName: \"kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.798679 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.900479 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.900523 
4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8t5\" (UniqueName: \"kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.900596 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.900704 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.906361 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.906472 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam\") 
pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.907552 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:42 crc kubenswrapper[4810]: I0219 15:33:42.917506 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8t5\" (UniqueName: \"kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-484qb\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:43 crc kubenswrapper[4810]: I0219 15:33:43.077241 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:33:43 crc kubenswrapper[4810]: I0219 15:33:43.889413 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb"] Feb 19 15:33:44 crc kubenswrapper[4810]: I0219 15:33:44.508504 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" event={"ID":"8c05e8c7-82f6-4ef1-a576-3c84e70dc570","Type":"ContainerStarted","Data":"086440bfa5f26ba1363eecaa5c50943356fef7aea25c05779628eb95ef3c57b2"} Feb 19 15:33:49 crc kubenswrapper[4810]: I0219 15:33:49.546601 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 15:33:50 crc kubenswrapper[4810]: I0219 15:33:50.291523 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 15:33:55 crc kubenswrapper[4810]: I0219 15:33:55.613320 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" event={"ID":"8c05e8c7-82f6-4ef1-a576-3c84e70dc570","Type":"ContainerStarted","Data":"fa5950c0010d836c770fe60039a24c6338efe9c7a11d0ea864b9fcf4b45f1ebc"} Feb 19 15:33:55 crc kubenswrapper[4810]: I0219 15:33:55.639583 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" podStartSLOduration=3.153378544 podStartE2EDuration="13.639568329s" podCreationTimestamp="2026-02-19 15:33:42 +0000 UTC" firstStartedPulling="2026-02-19 15:33:43.888166836 +0000 UTC m=+1453.370196960" lastFinishedPulling="2026-02-19 15:33:54.374356581 +0000 UTC m=+1463.856386745" observedRunningTime="2026-02-19 15:33:55.636813869 +0000 UTC m=+1465.118843993" watchObservedRunningTime="2026-02-19 15:33:55.639568329 +0000 UTC m=+1465.121598453" Feb 19 15:34:05 crc kubenswrapper[4810]: 
I0219 15:34:05.742026 4810 generic.go:334] "Generic (PLEG): container finished" podID="8c05e8c7-82f6-4ef1-a576-3c84e70dc570" containerID="fa5950c0010d836c770fe60039a24c6338efe9c7a11d0ea864b9fcf4b45f1ebc" exitCode=0 Feb 19 15:34:05 crc kubenswrapper[4810]: I0219 15:34:05.742085 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" event={"ID":"8c05e8c7-82f6-4ef1-a576-3c84e70dc570","Type":"ContainerDied","Data":"fa5950c0010d836c770fe60039a24c6338efe9c7a11d0ea864b9fcf4b45f1ebc"} Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.260748 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.333005 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb8t5\" (UniqueName: \"kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5\") pod \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.333484 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory\") pod \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.333706 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam\") pod \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.333955 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle\") pod \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\" (UID: \"8c05e8c7-82f6-4ef1-a576-3c84e70dc570\") " Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.339054 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5" (OuterVolumeSpecName: "kube-api-access-zb8t5") pod "8c05e8c7-82f6-4ef1-a576-3c84e70dc570" (UID: "8c05e8c7-82f6-4ef1-a576-3c84e70dc570"). InnerVolumeSpecName "kube-api-access-zb8t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.340586 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8c05e8c7-82f6-4ef1-a576-3c84e70dc570" (UID: "8c05e8c7-82f6-4ef1-a576-3c84e70dc570"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.361461 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory" (OuterVolumeSpecName: "inventory") pod "8c05e8c7-82f6-4ef1-a576-3c84e70dc570" (UID: "8c05e8c7-82f6-4ef1-a576-3c84e70dc570"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.365460 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c05e8c7-82f6-4ef1-a576-3c84e70dc570" (UID: "8c05e8c7-82f6-4ef1-a576-3c84e70dc570"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.436608 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.436642 4810 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.436653 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb8t5\" (UniqueName: \"kubernetes.io/projected/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-kube-api-access-zb8t5\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.436667 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c05e8c7-82f6-4ef1-a576-3c84e70dc570-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.760544 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" event={"ID":"8c05e8c7-82f6-4ef1-a576-3c84e70dc570","Type":"ContainerDied","Data":"086440bfa5f26ba1363eecaa5c50943356fef7aea25c05779628eb95ef3c57b2"} Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.760886 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086440bfa5f26ba1363eecaa5c50943356fef7aea25c05779628eb95ef3c57b2" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.760575 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-484qb" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.883754 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x"] Feb 19 15:34:07 crc kubenswrapper[4810]: E0219 15:34:07.884610 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c05e8c7-82f6-4ef1-a576-3c84e70dc570" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.884648 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c05e8c7-82f6-4ef1-a576-3c84e70dc570" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.885168 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c05e8c7-82f6-4ef1-a576-3c84e70dc570" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.886671 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.896686 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x"] Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.934613 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.935151 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.935679 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:34:07 crc kubenswrapper[4810]: I0219 15:34:07.936019 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.058708 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.058789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.059046 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskpf\" (UniqueName: \"kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.161615 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.161765 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fskpf\" (UniqueName: \"kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.162006 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.170361 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.170527 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.192145 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskpf\" (UniqueName: \"kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x748x\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.266137 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:08 crc kubenswrapper[4810]: I0219 15:34:08.832891 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x"] Feb 19 15:34:09 crc kubenswrapper[4810]: I0219 15:34:09.782441 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" event={"ID":"32dc9563-791b-421e-a807-41cc1e775b3a","Type":"ContainerStarted","Data":"0218f16e05c77fd4aff0c773c6f6f5f459d5e7eb7df2e492a757da51fe6de6ea"} Feb 19 15:34:09 crc kubenswrapper[4810]: I0219 15:34:09.782750 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" event={"ID":"32dc9563-791b-421e-a807-41cc1e775b3a","Type":"ContainerStarted","Data":"269910cad55a32d5b2042c87d986c67c8c7621ea6b376e19dc5e49dd23fda6f7"} Feb 19 15:34:09 crc kubenswrapper[4810]: I0219 15:34:09.818908 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" podStartSLOduration=2.369954075 podStartE2EDuration="2.818886909s" podCreationTimestamp="2026-02-19 15:34:07 +0000 UTC" firstStartedPulling="2026-02-19 15:34:08.832372043 +0000 UTC m=+1478.314402167" lastFinishedPulling="2026-02-19 15:34:09.281304867 +0000 UTC m=+1478.763335001" observedRunningTime="2026-02-19 15:34:09.80626955 +0000 UTC m=+1479.288299674" watchObservedRunningTime="2026-02-19 15:34:09.818886909 +0000 UTC m=+1479.300917043" Feb 19 15:34:12 crc kubenswrapper[4810]: I0219 15:34:12.818731 4810 generic.go:334] "Generic (PLEG): container finished" podID="32dc9563-791b-421e-a807-41cc1e775b3a" containerID="0218f16e05c77fd4aff0c773c6f6f5f459d5e7eb7df2e492a757da51fe6de6ea" exitCode=0 Feb 19 15:34:12 crc kubenswrapper[4810]: I0219 15:34:12.818795 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" event={"ID":"32dc9563-791b-421e-a807-41cc1e775b3a","Type":"ContainerDied","Data":"0218f16e05c77fd4aff0c773c6f6f5f459d5e7eb7df2e492a757da51fe6de6ea"} Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.263405 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.409268 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory\") pod \"32dc9563-791b-421e-a807-41cc1e775b3a\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.409498 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam\") pod \"32dc9563-791b-421e-a807-41cc1e775b3a\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.409661 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fskpf\" (UniqueName: \"kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf\") pod \"32dc9563-791b-421e-a807-41cc1e775b3a\" (UID: \"32dc9563-791b-421e-a807-41cc1e775b3a\") " Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.417059 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf" (OuterVolumeSpecName: "kube-api-access-fskpf") pod "32dc9563-791b-421e-a807-41cc1e775b3a" (UID: "32dc9563-791b-421e-a807-41cc1e775b3a"). InnerVolumeSpecName "kube-api-access-fskpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.445256 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory" (OuterVolumeSpecName: "inventory") pod "32dc9563-791b-421e-a807-41cc1e775b3a" (UID: "32dc9563-791b-421e-a807-41cc1e775b3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.451619 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "32dc9563-791b-421e-a807-41cc1e775b3a" (UID: "32dc9563-791b-421e-a807-41cc1e775b3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.512882 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.516129 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32dc9563-791b-421e-a807-41cc1e775b3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.516618 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fskpf\" (UniqueName: \"kubernetes.io/projected/32dc9563-791b-421e-a807-41cc1e775b3a-kube-api-access-fskpf\") on node \"crc\" DevicePath \"\"" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.848300 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" 
event={"ID":"32dc9563-791b-421e-a807-41cc1e775b3a","Type":"ContainerDied","Data":"269910cad55a32d5b2042c87d986c67c8c7621ea6b376e19dc5e49dd23fda6f7"} Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.848358 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269910cad55a32d5b2042c87d986c67c8c7621ea6b376e19dc5e49dd23fda6f7" Feb 19 15:34:14 crc kubenswrapper[4810]: I0219 15:34:14.848398 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x748x" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.050252 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx"] Feb 19 15:34:15 crc kubenswrapper[4810]: E0219 15:34:15.050761 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dc9563-791b-421e-a807-41cc1e775b3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.050790 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dc9563-791b-421e-a807-41cc1e775b3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.051055 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dc9563-791b-421e-a807-41cc1e775b3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.051806 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.061752 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.061918 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.062028 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.062192 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.078029 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx"] Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.128695 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.128787 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.128841 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwflq\" (UniqueName: \"kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.128877 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.230617 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.230699 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwflq\" (UniqueName: \"kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.230738 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.230895 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.235808 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.235885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.237010 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.252756 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwflq\" (UniqueName: \"kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.381780 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:34:15 crc kubenswrapper[4810]: I0219 15:34:15.950016 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx"] Feb 19 15:34:16 crc kubenswrapper[4810]: I0219 15:34:16.870587 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" event={"ID":"c4a9ca21-e1c7-490d-8078-14407b530301","Type":"ContainerStarted","Data":"aea7e9dbe67f144df792c12f2c4b232cda1fd424cee70ce7f1f3b4844bd41e5c"} Feb 19 15:34:16 crc kubenswrapper[4810]: I0219 15:34:16.870864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" event={"ID":"c4a9ca21-e1c7-490d-8078-14407b530301","Type":"ContainerStarted","Data":"aa1bc9f36577e42d99564c9f4bae7a23f2588d18bc7dc16da1d066acf3ad1da1"} Feb 19 15:34:16 crc kubenswrapper[4810]: I0219 15:34:16.903571 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" podStartSLOduration=1.418523432 podStartE2EDuration="1.903544378s" podCreationTimestamp="2026-02-19 15:34:15 +0000 UTC" firstStartedPulling="2026-02-19 15:34:15.948595809 +0000 UTC m=+1485.430625933" 
lastFinishedPulling="2026-02-19 15:34:16.433616745 +0000 UTC m=+1485.915646879" observedRunningTime="2026-02-19 15:34:16.894015198 +0000 UTC m=+1486.376045322" watchObservedRunningTime="2026-02-19 15:34:16.903544378 +0000 UTC m=+1486.385574522" Feb 19 15:34:39 crc kubenswrapper[4810]: I0219 15:34:39.561727 4810 scope.go:117] "RemoveContainer" containerID="c96da79c9ab27a3dc86e77ea8607bc39b965b2f04ca64ded9b1c4a74386d352e" Feb 19 15:34:39 crc kubenswrapper[4810]: I0219 15:34:39.603881 4810 scope.go:117] "RemoveContainer" containerID="99c27801bb39f1082a20de11443ab5b4c03a227dc67e8dce6456d77eb0a7c2db" Feb 19 15:34:39 crc kubenswrapper[4810]: I0219 15:34:39.689463 4810 scope.go:117] "RemoveContainer" containerID="930ffde39b4c3d7913e11cb429594ee8cd480971fa345e4d0d06f707706f3472" Feb 19 15:34:49 crc kubenswrapper[4810]: I0219 15:34:49.537816 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:34:49 crc kubenswrapper[4810]: I0219 15:34:49.538571 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:35:19 crc kubenswrapper[4810]: I0219 15:35:19.537545 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:35:19 crc kubenswrapper[4810]: I0219 15:35:19.538151 4810 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.766123 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.769980 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.770122 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.850242 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.850309 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wvh\" (UniqueName: \"kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.850433 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content\") pod \"community-operators-v5gm5\" (UID: 
\"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.952520 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.952591 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4wvh\" (UniqueName: \"kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.952620 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.953182 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.953212 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") 
" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:33 crc kubenswrapper[4810]: I0219 15:35:33.974542 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wvh\" (UniqueName: \"kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh\") pod \"community-operators-v5gm5\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:34 crc kubenswrapper[4810]: I0219 15:35:34.114817 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:34 crc kubenswrapper[4810]: I0219 15:35:34.639575 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:34 crc kubenswrapper[4810]: W0219 15:35:34.642695 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9786edb_c71c_4700_824b_2e348c15b77f.slice/crio-5e16d58b749a7926e83d0cd1c29f80582242aa5e44d6f95001df47fc5609615f WatchSource:0}: Error finding container 5e16d58b749a7926e83d0cd1c29f80582242aa5e44d6f95001df47fc5609615f: Status 404 returned error can't find the container with id 5e16d58b749a7926e83d0cd1c29f80582242aa5e44d6f95001df47fc5609615f Feb 19 15:35:34 crc kubenswrapper[4810]: I0219 15:35:34.729921 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerStarted","Data":"5e16d58b749a7926e83d0cd1c29f80582242aa5e44d6f95001df47fc5609615f"} Feb 19 15:35:35 crc kubenswrapper[4810]: I0219 15:35:35.739085 4810 generic.go:334] "Generic (PLEG): container finished" podID="c9786edb-c71c-4700-824b-2e348c15b77f" containerID="f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d" exitCode=0 Feb 19 15:35:35 crc kubenswrapper[4810]: I0219 
15:35:35.739150 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerDied","Data":"f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d"} Feb 19 15:35:36 crc kubenswrapper[4810]: I0219 15:35:36.750712 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerStarted","Data":"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764"} Feb 19 15:35:37 crc kubenswrapper[4810]: I0219 15:35:37.761067 4810 generic.go:334] "Generic (PLEG): container finished" podID="c9786edb-c71c-4700-824b-2e348c15b77f" containerID="8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764" exitCode=0 Feb 19 15:35:37 crc kubenswrapper[4810]: I0219 15:35:37.761163 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerDied","Data":"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764"} Feb 19 15:35:38 crc kubenswrapper[4810]: I0219 15:35:38.776527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerStarted","Data":"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8"} Feb 19 15:35:38 crc kubenswrapper[4810]: I0219 15:35:38.813982 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5gm5" podStartSLOduration=3.371785586 podStartE2EDuration="5.813948995s" podCreationTimestamp="2026-02-19 15:35:33 +0000 UTC" firstStartedPulling="2026-02-19 15:35:35.740893595 +0000 UTC m=+1565.222923709" lastFinishedPulling="2026-02-19 15:35:38.183056994 +0000 UTC m=+1567.665087118" 
observedRunningTime="2026-02-19 15:35:38.794795807 +0000 UTC m=+1568.276825981" watchObservedRunningTime="2026-02-19 15:35:38.813948995 +0000 UTC m=+1568.295979159" Feb 19 15:35:39 crc kubenswrapper[4810]: I0219 15:35:39.830228 4810 scope.go:117] "RemoveContainer" containerID="5e6853b5e9878d2a03e52d891d8a223a4096ce551b9935e29c1d5c5f37ac41cb" Feb 19 15:35:39 crc kubenswrapper[4810]: I0219 15:35:39.867889 4810 scope.go:117] "RemoveContainer" containerID="c6f5d9b5a6b15c45dc5d760616281a4656e6d239a89c530245a55061c13bc709" Feb 19 15:35:39 crc kubenswrapper[4810]: I0219 15:35:39.891126 4810 scope.go:117] "RemoveContainer" containerID="e36236dacc44e9719f0a5616b325da89fd715c826d97ed6b2c660301840187d2" Feb 19 15:35:39 crc kubenswrapper[4810]: I0219 15:35:39.937089 4810 scope.go:117] "RemoveContainer" containerID="d88c68d698d3972f0825f45fc0f2b6d882a24f69749acf74ad5f3d90f016e7f9" Feb 19 15:35:44 crc kubenswrapper[4810]: I0219 15:35:44.115950 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:44 crc kubenswrapper[4810]: I0219 15:35:44.116511 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:44 crc kubenswrapper[4810]: I0219 15:35:44.186254 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:44 crc kubenswrapper[4810]: I0219 15:35:44.944181 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:45 crc kubenswrapper[4810]: I0219 15:35:45.053352 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:46 crc kubenswrapper[4810]: I0219 15:35:46.864982 4810 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-v5gm5" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="registry-server" containerID="cri-o://2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8" gracePeriod=2 Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.376044 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.432477 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4wvh\" (UniqueName: \"kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh\") pod \"c9786edb-c71c-4700-824b-2e348c15b77f\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.432724 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities\") pod \"c9786edb-c71c-4700-824b-2e348c15b77f\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.432915 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content\") pod \"c9786edb-c71c-4700-824b-2e348c15b77f\" (UID: \"c9786edb-c71c-4700-824b-2e348c15b77f\") " Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.433528 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities" (OuterVolumeSpecName: "utilities") pod "c9786edb-c71c-4700-824b-2e348c15b77f" (UID: "c9786edb-c71c-4700-824b-2e348c15b77f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.455519 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh" (OuterVolumeSpecName: "kube-api-access-q4wvh") pod "c9786edb-c71c-4700-824b-2e348c15b77f" (UID: "c9786edb-c71c-4700-824b-2e348c15b77f"). InnerVolumeSpecName "kube-api-access-q4wvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.481553 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9786edb-c71c-4700-824b-2e348c15b77f" (UID: "c9786edb-c71c-4700-824b-2e348c15b77f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.535490 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.535520 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4wvh\" (UniqueName: \"kubernetes.io/projected/c9786edb-c71c-4700-824b-2e348c15b77f-kube-api-access-q4wvh\") on node \"crc\" DevicePath \"\"" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.535533 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9786edb-c71c-4700-824b-2e348c15b77f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.876889 4810 generic.go:334] "Generic (PLEG): container finished" podID="c9786edb-c71c-4700-824b-2e348c15b77f" 
containerID="2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8" exitCode=0 Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.876917 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerDied","Data":"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8"} Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.877179 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5gm5" event={"ID":"c9786edb-c71c-4700-824b-2e348c15b77f","Type":"ContainerDied","Data":"5e16d58b749a7926e83d0cd1c29f80582242aa5e44d6f95001df47fc5609615f"} Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.877197 4810 scope.go:117] "RemoveContainer" containerID="2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.877031 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5gm5" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.918310 4810 scope.go:117] "RemoveContainer" containerID="8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764" Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.936366 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.951570 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5gm5"] Feb 19 15:35:47 crc kubenswrapper[4810]: I0219 15:35:47.955314 4810 scope.go:117] "RemoveContainer" containerID="f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.003590 4810 scope.go:117] "RemoveContainer" containerID="2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8" Feb 19 15:35:48 crc kubenswrapper[4810]: E0219 15:35:48.004316 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8\": container with ID starting with 2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8 not found: ID does not exist" containerID="2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.004411 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8"} err="failed to get container status \"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8\": rpc error: code = NotFound desc = could not find container \"2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8\": container with ID starting with 2336427e8f03e8aaf227aa7c6fe8af824454db5666683d65d337e84a41da9dd8 not 
found: ID does not exist" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.004460 4810 scope.go:117] "RemoveContainer" containerID="8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764" Feb 19 15:35:48 crc kubenswrapper[4810]: E0219 15:35:48.005151 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764\": container with ID starting with 8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764 not found: ID does not exist" containerID="8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.005361 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764"} err="failed to get container status \"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764\": rpc error: code = NotFound desc = could not find container \"8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764\": container with ID starting with 8cb0bdc04704794d8377332b63549fd9711385b8e8633586279d263c62b2b764 not found: ID does not exist" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.005525 4810 scope.go:117] "RemoveContainer" containerID="f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d" Feb 19 15:35:48 crc kubenswrapper[4810]: E0219 15:35:48.006161 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d\": container with ID starting with f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d not found: ID does not exist" containerID="f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d" Feb 19 15:35:48 crc kubenswrapper[4810]: I0219 15:35:48.006199 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d"} err="failed to get container status \"f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d\": rpc error: code = NotFound desc = could not find container \"f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d\": container with ID starting with f42d7f9a1230992f8bd9c5e66cc486a27519984e9110e6a5a352dd034567457d not found: ID does not exist" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.450475 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" path="/var/lib/kubelet/pods/c9786edb-c71c-4700-824b-2e348c15b77f/volumes" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.537190 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.537232 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.537269 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.537983 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.538035 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" gracePeriod=600 Feb 19 15:35:49 crc kubenswrapper[4810]: E0219 15:35:49.661593 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.906254 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" exitCode=0 Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.906299 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a"} Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.906433 4810 scope.go:117] "RemoveContainer" containerID="2979159f3325af188cf73d374cfc4f7b1a64cb0be10361454a84d92914ce8075" Feb 19 15:35:49 crc kubenswrapper[4810]: I0219 15:35:49.907296 4810 
scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:35:49 crc kubenswrapper[4810]: E0219 15:35:49.907664 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.267928 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:35:51 crc kubenswrapper[4810]: E0219 15:35:51.269263 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="extract-utilities" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.269292 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="extract-utilities" Feb 19 15:35:51 crc kubenswrapper[4810]: E0219 15:35:51.269428 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="extract-content" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.269444 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="extract-content" Feb 19 15:35:51 crc kubenswrapper[4810]: E0219 15:35:51.270105 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="registry-server" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.270132 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="registry-server" Feb 19 15:35:51 crc 
kubenswrapper[4810]: I0219 15:35:51.270580 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9786edb-c71c-4700-824b-2e348c15b77f" containerName="registry-server" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.273642 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.291213 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.315009 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.315070 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qwd\" (UniqueName: \"kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.315098 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.416976 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.417028 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qwd\" (UniqueName: \"kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.417047 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.417559 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.417632 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.441408 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9qwd\" (UniqueName: 
\"kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd\") pod \"certified-operators-hdrj4\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:51 crc kubenswrapper[4810]: I0219 15:35:51.623657 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:35:52 crc kubenswrapper[4810]: I0219 15:35:52.177300 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:35:52 crc kubenswrapper[4810]: I0219 15:35:52.943918 4810 generic.go:334] "Generic (PLEG): container finished" podID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerID="ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1" exitCode=0 Feb 19 15:35:52 crc kubenswrapper[4810]: I0219 15:35:52.944170 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerDied","Data":"ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1"} Feb 19 15:35:52 crc kubenswrapper[4810]: I0219 15:35:52.944197 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerStarted","Data":"c86e35abda59680ef98b6f72e106a7dadaf55f94a3193db1535501490c5c5b34"} Feb 19 15:35:54 crc kubenswrapper[4810]: I0219 15:35:54.967734 4810 generic.go:334] "Generic (PLEG): container finished" podID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerID="d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0" exitCode=0 Feb 19 15:35:54 crc kubenswrapper[4810]: I0219 15:35:54.967801 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" 
event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerDied","Data":"d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0"} Feb 19 15:35:55 crc kubenswrapper[4810]: I0219 15:35:55.978538 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerStarted","Data":"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd"} Feb 19 15:35:56 crc kubenswrapper[4810]: I0219 15:35:56.005314 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hdrj4" podStartSLOduration=2.556993798 podStartE2EDuration="5.00529805s" podCreationTimestamp="2026-02-19 15:35:51 +0000 UTC" firstStartedPulling="2026-02-19 15:35:52.947076571 +0000 UTC m=+1582.429106695" lastFinishedPulling="2026-02-19 15:35:55.395380823 +0000 UTC m=+1584.877410947" observedRunningTime="2026-02-19 15:35:56.000864579 +0000 UTC m=+1585.482894723" watchObservedRunningTime="2026-02-19 15:35:56.00529805 +0000 UTC m=+1585.487328174" Feb 19 15:36:01 crc kubenswrapper[4810]: I0219 15:36:01.624268 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:01 crc kubenswrapper[4810]: I0219 15:36:01.625147 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:01 crc kubenswrapper[4810]: I0219 15:36:01.688835 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:02 crc kubenswrapper[4810]: I0219 15:36:02.113959 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:02 crc kubenswrapper[4810]: I0219 15:36:02.178750 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.072826 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hdrj4" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="registry-server" containerID="cri-o://814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd" gracePeriod=2 Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.439568 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:36:04 crc kubenswrapper[4810]: E0219 15:36:04.439885 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.713963 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.796530 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qwd\" (UniqueName: \"kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd\") pod \"6eda6aa7-94b5-4d12-95cb-76730652c627\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.796587 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content\") pod \"6eda6aa7-94b5-4d12-95cb-76730652c627\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.796692 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities\") pod \"6eda6aa7-94b5-4d12-95cb-76730652c627\" (UID: \"6eda6aa7-94b5-4d12-95cb-76730652c627\") " Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.797642 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities" (OuterVolumeSpecName: "utilities") pod "6eda6aa7-94b5-4d12-95cb-76730652c627" (UID: "6eda6aa7-94b5-4d12-95cb-76730652c627"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.809547 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd" (OuterVolumeSpecName: "kube-api-access-c9qwd") pod "6eda6aa7-94b5-4d12-95cb-76730652c627" (UID: "6eda6aa7-94b5-4d12-95cb-76730652c627"). InnerVolumeSpecName "kube-api-access-c9qwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.899562 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qwd\" (UniqueName: \"kubernetes.io/projected/6eda6aa7-94b5-4d12-95cb-76730652c627-kube-api-access-c9qwd\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:04 crc kubenswrapper[4810]: I0219 15:36:04.899618 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.090477 4810 generic.go:334] "Generic (PLEG): container finished" podID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerID="814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd" exitCode=0 Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.090671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerDied","Data":"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd"} Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.090785 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdrj4" event={"ID":"6eda6aa7-94b5-4d12-95cb-76730652c627","Type":"ContainerDied","Data":"c86e35abda59680ef98b6f72e106a7dadaf55f94a3193db1535501490c5c5b34"} Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.090715 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hdrj4" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.090957 4810 scope.go:117] "RemoveContainer" containerID="814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.119599 4810 scope.go:117] "RemoveContainer" containerID="d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.149957 4810 scope.go:117] "RemoveContainer" containerID="ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.194768 4810 scope.go:117] "RemoveContainer" containerID="814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd" Feb 19 15:36:05 crc kubenswrapper[4810]: E0219 15:36:05.195290 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd\": container with ID starting with 814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd not found: ID does not exist" containerID="814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.195375 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd"} err="failed to get container status \"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd\": rpc error: code = NotFound desc = could not find container \"814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd\": container with ID starting with 814f21a94bed7e5fadc18fb49db4f445e001e72a3d8768460c85ea077a8946fd not found: ID does not exist" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.195415 4810 scope.go:117] "RemoveContainer" 
containerID="d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0" Feb 19 15:36:05 crc kubenswrapper[4810]: E0219 15:36:05.196144 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0\": container with ID starting with d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0 not found: ID does not exist" containerID="d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.196195 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0"} err="failed to get container status \"d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0\": rpc error: code = NotFound desc = could not find container \"d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0\": container with ID starting with d18557e50f207d86079b71b1c0dd4c5bae5091bff4ef8afa200c09cfca4e54b0 not found: ID does not exist" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.196227 4810 scope.go:117] "RemoveContainer" containerID="ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1" Feb 19 15:36:05 crc kubenswrapper[4810]: E0219 15:36:05.197077 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1\": container with ID starting with ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1 not found: ID does not exist" containerID="ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.197119 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1"} err="failed to get container status \"ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1\": rpc error: code = NotFound desc = could not find container \"ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1\": container with ID starting with ef601d7cf00e17850747c711a62d9d231bcc24ddc7e66dc470bc4ddf81533ce1 not found: ID does not exist" Feb 19 15:36:05 crc kubenswrapper[4810]: I0219 15:36:05.953244 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eda6aa7-94b5-4d12-95cb-76730652c627" (UID: "6eda6aa7-94b5-4d12-95cb-76730652c627"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:36:06 crc kubenswrapper[4810]: I0219 15:36:06.021963 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eda6aa7-94b5-4d12-95cb-76730652c627-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:36:06 crc kubenswrapper[4810]: I0219 15:36:06.028005 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:36:06 crc kubenswrapper[4810]: I0219 15:36:06.038545 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hdrj4"] Feb 19 15:36:07 crc kubenswrapper[4810]: I0219 15:36:07.453258 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" path="/var/lib/kubelet/pods/6eda6aa7-94b5-4d12-95cb-76730652c627/volumes" Feb 19 15:36:15 crc kubenswrapper[4810]: I0219 15:36:15.441491 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:36:15 crc kubenswrapper[4810]: E0219 
15:36:15.444158 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:36:27 crc kubenswrapper[4810]: I0219 15:36:27.439665 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:36:27 crc kubenswrapper[4810]: E0219 15:36:27.440990 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:36:40 crc kubenswrapper[4810]: I0219 15:36:40.030426 4810 scope.go:117] "RemoveContainer" containerID="a0f55d5dfd4c1951d245770b89cf22415d60ecb97cf8c05e857fc4583af61f68" Feb 19 15:36:40 crc kubenswrapper[4810]: I0219 15:36:40.055071 4810 scope.go:117] "RemoveContainer" containerID="c249e977cefa0135aa004a3d9624e2b5787cc21f20239233e79602a670cf0acb" Feb 19 15:36:41 crc kubenswrapper[4810]: I0219 15:36:41.465575 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:36:41 crc kubenswrapper[4810]: E0219 15:36:41.466266 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:36:56 crc kubenswrapper[4810]: I0219 15:36:56.439773 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:36:56 crc kubenswrapper[4810]: E0219 15:36:56.440826 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:37:10 crc kubenswrapper[4810]: I0219 15:37:10.439935 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:37:10 crc kubenswrapper[4810]: E0219 15:37:10.441281 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:37:22 crc kubenswrapper[4810]: I0219 15:37:22.045273 4810 generic.go:334] "Generic (PLEG): container finished" podID="c4a9ca21-e1c7-490d-8078-14407b530301" containerID="aea7e9dbe67f144df792c12f2c4b232cda1fd424cee70ce7f1f3b4844bd41e5c" exitCode=0 Feb 19 15:37:22 crc kubenswrapper[4810]: I0219 15:37:22.045450 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" event={"ID":"c4a9ca21-e1c7-490d-8078-14407b530301","Type":"ContainerDied","Data":"aea7e9dbe67f144df792c12f2c4b232cda1fd424cee70ce7f1f3b4844bd41e5c"} Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.442204 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:37:23 crc kubenswrapper[4810]: E0219 15:37:23.442784 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.494745 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.526291 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwflq\" (UniqueName: \"kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq\") pod \"c4a9ca21-e1c7-490d-8078-14407b530301\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.526436 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory\") pod \"c4a9ca21-e1c7-490d-8078-14407b530301\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.526580 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle\") pod \"c4a9ca21-e1c7-490d-8078-14407b530301\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.526615 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam\") pod \"c4a9ca21-e1c7-490d-8078-14407b530301\" (UID: \"c4a9ca21-e1c7-490d-8078-14407b530301\") " Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.533374 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c4a9ca21-e1c7-490d-8078-14407b530301" (UID: "c4a9ca21-e1c7-490d-8078-14407b530301"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.533616 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq" (OuterVolumeSpecName: "kube-api-access-rwflq") pod "c4a9ca21-e1c7-490d-8078-14407b530301" (UID: "c4a9ca21-e1c7-490d-8078-14407b530301"). InnerVolumeSpecName "kube-api-access-rwflq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.563522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4a9ca21-e1c7-490d-8078-14407b530301" (UID: "c4a9ca21-e1c7-490d-8078-14407b530301"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.572284 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory" (OuterVolumeSpecName: "inventory") pod "c4a9ca21-e1c7-490d-8078-14407b530301" (UID: "c4a9ca21-e1c7-490d-8078-14407b530301"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.629029 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwflq\" (UniqueName: \"kubernetes.io/projected/c4a9ca21-e1c7-490d-8078-14407b530301-kube-api-access-rwflq\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.629311 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.629341 4810 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:23 crc kubenswrapper[4810]: I0219 15:37:23.629358 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a9ca21-e1c7-490d-8078-14407b530301-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.071412 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" event={"ID":"c4a9ca21-e1c7-490d-8078-14407b530301","Type":"ContainerDied","Data":"aa1bc9f36577e42d99564c9f4bae7a23f2588d18bc7dc16da1d066acf3ad1da1"} Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.071459 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.071471 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1bc9f36577e42d99564c9f4bae7a23f2588d18bc7dc16da1d066acf3ad1da1" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.190938 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl"] Feb 19 15:37:24 crc kubenswrapper[4810]: E0219 15:37:24.191649 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="extract-content" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.191677 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="extract-content" Feb 19 15:37:24 crc kubenswrapper[4810]: E0219 15:37:24.191713 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="extract-utilities" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.191727 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="extract-utilities" Feb 19 15:37:24 crc kubenswrapper[4810]: E0219 15:37:24.191757 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="registry-server" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.191771 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="registry-server" Feb 19 15:37:24 crc kubenswrapper[4810]: E0219 15:37:24.191809 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a9ca21-e1c7-490d-8078-14407b530301" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.191823 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a9ca21-e1c7-490d-8078-14407b530301" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.192217 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a9ca21-e1c7-490d-8078-14407b530301" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.192260 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eda6aa7-94b5-4d12-95cb-76730652c627" containerName="registry-server" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.193349 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.203008 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.203021 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.203296 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.203612 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.207374 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl"] Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.242036 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzg4x\" (UniqueName: 
\"kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.242129 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.242311 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.344406 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzg4x\" (UniqueName: \"kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.344505 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.344606 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.349663 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.349905 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 15:37:24.373646 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzg4x\" (UniqueName: \"kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-flmfl\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:24 crc kubenswrapper[4810]: I0219 
15:37:24.551449 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:37:25 crc kubenswrapper[4810]: I0219 15:37:25.141264 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl"] Feb 19 15:37:25 crc kubenswrapper[4810]: W0219 15:37:25.143177 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6255c5c_26d4_421f_9156_1bdd2f5adcc6.slice/crio-df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b WatchSource:0}: Error finding container df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b: Status 404 returned error can't find the container with id df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b Feb 19 15:37:25 crc kubenswrapper[4810]: I0219 15:37:25.145689 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:37:26 crc kubenswrapper[4810]: I0219 15:37:26.101573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" event={"ID":"e6255c5c-26d4-421f-9156-1bdd2f5adcc6","Type":"ContainerStarted","Data":"46b7253182b6dc0e64c2e59a4d275ed006838d41d039fd87a4c506ea7296776b"} Feb 19 15:37:26 crc kubenswrapper[4810]: I0219 15:37:26.102628 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" event={"ID":"e6255c5c-26d4-421f-9156-1bdd2f5adcc6","Type":"ContainerStarted","Data":"df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b"} Feb 19 15:37:26 crc kubenswrapper[4810]: I0219 15:37:26.131940 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" podStartSLOduration=1.661178058 
podStartE2EDuration="2.131912687s" podCreationTimestamp="2026-02-19 15:37:24 +0000 UTC" firstStartedPulling="2026-02-19 15:37:25.145380112 +0000 UTC m=+1674.627410236" lastFinishedPulling="2026-02-19 15:37:25.616114741 +0000 UTC m=+1675.098144865" observedRunningTime="2026-02-19 15:37:26.125084206 +0000 UTC m=+1675.607114370" watchObservedRunningTime="2026-02-19 15:37:26.131912687 +0000 UTC m=+1675.613942851" Feb 19 15:37:34 crc kubenswrapper[4810]: I0219 15:37:34.439876 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:37:34 crc kubenswrapper[4810]: E0219 15:37:34.441051 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:37:40 crc kubenswrapper[4810]: I0219 15:37:40.204423 4810 scope.go:117] "RemoveContainer" containerID="67d7b1b2fe05fdf05040100d731bde7bfd4f1ee47f3a5b6f3aa77c49f45ebc0c" Feb 19 15:37:40 crc kubenswrapper[4810]: I0219 15:37:40.234694 4810 scope.go:117] "RemoveContainer" containerID="bf2b72419abc5b9d1dc324265338d9686d9c4dd72a0204301ce8a03dc9ab3fd8" Feb 19 15:37:46 crc kubenswrapper[4810]: I0219 15:37:46.439649 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:37:46 crc kubenswrapper[4810]: E0219 15:37:46.441136 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:37:59 crc kubenswrapper[4810]: I0219 15:37:59.440164 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:37:59 crc kubenswrapper[4810]: E0219 15:37:59.440983 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:38:03 crc kubenswrapper[4810]: I0219 15:38:03.079303 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pd6hg"] Feb 19 15:38:03 crc kubenswrapper[4810]: I0219 15:38:03.095475 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pd6hg"] Feb 19 15:38:03 crc kubenswrapper[4810]: I0219 15:38:03.454054 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2222b6ca-79cd-48d7-b262-87e5cd4db6b1" path="/var/lib/kubelet/pods/2222b6ca-79cd-48d7-b262-87e5cd4db6b1/volumes" Feb 19 15:38:04 crc kubenswrapper[4810]: I0219 15:38:04.034836 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-79a4-account-create-update-mrm9x"] Feb 19 15:38:04 crc kubenswrapper[4810]: I0219 15:38:04.044138 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-79a4-account-create-update-mrm9x"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.048522 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hm7ql"] Feb 19 15:38:05 crc 
kubenswrapper[4810]: I0219 15:38:05.069671 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hm7ql"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.082864 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8fq2p"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.096581 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c263-account-create-update-wz7k6"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.108362 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8fq2p"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.118845 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c263-account-create-update-wz7k6"] Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.463461 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6040a1-1df6-44da-ba23-4c7b1ccf17b1" path="/var/lib/kubelet/pods/8b6040a1-1df6-44da-ba23-4c7b1ccf17b1/volumes" Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.466364 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8e48ca-d504-48a9-9e92-97651cd15d28" path="/var/lib/kubelet/pods/9d8e48ca-d504-48a9-9e92-97651cd15d28/volumes" Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.467603 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b22749-0497-48c2-b943-2c48aef05707" path="/var/lib/kubelet/pods/b4b22749-0497-48c2-b943-2c48aef05707/volumes" Feb 19 15:38:05 crc kubenswrapper[4810]: I0219 15:38:05.468891 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca24a94-16a8-4b5b-9d99-bc98919feb21" path="/var/lib/kubelet/pods/bca24a94-16a8-4b5b-9d99-bc98919feb21/volumes" Feb 19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.048067 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-002d-account-create-update-6kk29"] Feb 
19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.062497 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-ffjjc"] Feb 19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.073296 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-a42d-account-create-update-l2kgw"] Feb 19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.084976 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-002d-account-create-update-6kk29"] Feb 19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.096002 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-ffjjc"] Feb 19 15:38:06 crc kubenswrapper[4810]: I0219 15:38:06.107222 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-a42d-account-create-update-l2kgw"] Feb 19 15:38:07 crc kubenswrapper[4810]: I0219 15:38:07.456137 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21eb5702-ca94-449c-839c-e3970593417d" path="/var/lib/kubelet/pods/21eb5702-ca94-449c-839c-e3970593417d/volumes" Feb 19 15:38:07 crc kubenswrapper[4810]: I0219 15:38:07.457020 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f791f64-69f8-448d-8370-aeef0db30071" path="/var/lib/kubelet/pods/7f791f64-69f8-448d-8370-aeef0db30071/volumes" Feb 19 15:38:07 crc kubenswrapper[4810]: I0219 15:38:07.457546 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c141144d-36a6-4c0c-b764-d7453c101ea3" path="/var/lib/kubelet/pods/c141144d-36a6-4c0c-b764-d7453c101ea3/volumes" Feb 19 15:38:12 crc kubenswrapper[4810]: I0219 15:38:12.050669 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j78h8"] Feb 19 15:38:12 crc kubenswrapper[4810]: I0219 15:38:12.065174 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j78h8"] Feb 19 15:38:13 crc kubenswrapper[4810]: I0219 15:38:13.456833 4810 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e12a1f8-d78c-41b0-b295-e5e661bf0820" path="/var/lib/kubelet/pods/8e12a1f8-d78c-41b0-b295-e5e661bf0820/volumes" Feb 19 15:38:14 crc kubenswrapper[4810]: I0219 15:38:14.439947 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:38:14 crc kubenswrapper[4810]: E0219 15:38:14.440646 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:38:29 crc kubenswrapper[4810]: I0219 15:38:29.440579 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:38:29 crc kubenswrapper[4810]: E0219 15:38:29.441943 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:38:32 crc kubenswrapper[4810]: I0219 15:38:32.060152 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9wtnf"] Feb 19 15:38:32 crc kubenswrapper[4810]: I0219 15:38:32.075766 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9wtnf"] Feb 19 15:38:33 crc kubenswrapper[4810]: I0219 15:38:33.478479 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a6217aad-07e6-49b6-8e80-41e75cecaaf5" path="/var/lib/kubelet/pods/a6217aad-07e6-49b6-8e80-41e75cecaaf5/volumes" Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.054484 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9cp2h"] Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.073739 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc04-account-create-update-dmf9z"] Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.092606 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9cp2h"] Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.105302 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dc04-account-create-update-dmf9z"] Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.451415 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b15da5-701a-492a-b986-99b767d2876c" path="/var/lib/kubelet/pods/05b15da5-701a-492a-b986-99b767d2876c/volumes" Feb 19 15:38:35 crc kubenswrapper[4810]: I0219 15:38:35.452529 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700fd144-e077-4468-80a4-f131fdb9d67e" path="/var/lib/kubelet/pods/700fd144-e077-4468-80a4-f131fdb9d67e/volumes" Feb 19 15:38:38 crc kubenswrapper[4810]: I0219 15:38:38.027564 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8ce6-account-create-update-ztxw9"] Feb 19 15:38:38 crc kubenswrapper[4810]: I0219 15:38:38.038868 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8ce6-account-create-update-ztxw9"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.034420 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-43b3-account-create-update-6gqq5"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.050701 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kpf4t"] Feb 19 15:38:39 crc 
kubenswrapper[4810]: I0219 15:38:39.063200 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ptwwk"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.072242 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kpf4t"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.079967 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-43b3-account-create-update-6gqq5"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.088638 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ptwwk"] Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.449479 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31093793-65b6-467c-8d5b-218e108fd330" path="/var/lib/kubelet/pods/31093793-65b6-467c-8d5b-218e108fd330/volumes" Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.450176 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5770188c-7480-4529-8450-3d1a44cf50d6" path="/var/lib/kubelet/pods/5770188c-7480-4529-8450-3d1a44cf50d6/volumes" Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.450788 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac347c6-4f1b-4b05-87a0-9332dec2ba9d" path="/var/lib/kubelet/pods/5ac347c6-4f1b-4b05-87a0-9332dec2ba9d/volumes" Feb 19 15:38:39 crc kubenswrapper[4810]: I0219 15:38:39.451367 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c7665c-330a-45b8-b461-bd08b069b747" path="/var/lib/kubelet/pods/c2c7665c-330a-45b8-b461-bd08b069b747/volumes" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.307422 4810 scope.go:117] "RemoveContainer" containerID="e9bec1a534d25c6a14471d437a183afe877fd56061c47642147a9b763a2c6190" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.361074 4810 scope.go:117] "RemoveContainer" containerID="9704b5e429194139e41b388c5c38c38e001c096f5c05263f386a9f8220160ce9" Feb 19 
15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.409578 4810 scope.go:117] "RemoveContainer" containerID="235b187eb3bd473181cf1d8a9d02071a2d445e3841afa055bb60de833ffbcec1" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.452655 4810 scope.go:117] "RemoveContainer" containerID="e7fde16e8762a1bda91d4aa8dbb2846cb1cc1e9049ca68a05b5713d4c512a4b8" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.493132 4810 scope.go:117] "RemoveContainer" containerID="ffa1569c8787f552547599568a5b882194ae4226f8c9d82766a9a36d606eb91a" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.534070 4810 scope.go:117] "RemoveContainer" containerID="f70ca73c865d94282b27c6f5f6e86e7e4679dda8ba68c283e08b2a6314c29261" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.581561 4810 scope.go:117] "RemoveContainer" containerID="f1368987940841c90c15fd62f94998dda89194fc11c576179de536979c6adc82" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.609530 4810 scope.go:117] "RemoveContainer" containerID="58aa0a5c9039d1f2e2e2fa3a8464a2460e5ec3924d291301dda66cb2085c37f2" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.631778 4810 scope.go:117] "RemoveContainer" containerID="7503cb280210459a8b150bda1b5d65c5f4d10619291800104ee52fa8927bdb82" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.655510 4810 scope.go:117] "RemoveContainer" containerID="78c0fb5a6a2ddab1d7b49b378f905fccf1b07a5af8d34ee0f62b947801682e49" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.685553 4810 scope.go:117] "RemoveContainer" containerID="9f4b973758b7ab4df4daae42709ff686161b77974aebe961605bcdd7b7ee6895" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.716694 4810 scope.go:117] "RemoveContainer" containerID="683b765d3388918ee0690173c641c6f414e8fc77c164afb4ab566f37723b326b" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.742689 4810 scope.go:117] "RemoveContainer" containerID="72214a8edd0c54f3823201969c4eb1d1b241f1f9c89ed676fa59ee81e422993e" Feb 19 15:38:40 crc 
kubenswrapper[4810]: I0219 15:38:40.774904 4810 scope.go:117] "RemoveContainer" containerID="317f50bb910bab31d9c1242a97f9988671bee73e88d7e795833b2626793ec0c6" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.803494 4810 scope.go:117] "RemoveContainer" containerID="084e9f9fbe2e5f93a513bee51567c16a8dbaf61639fac09083f059ee237ae6a4" Feb 19 15:38:40 crc kubenswrapper[4810]: I0219 15:38:40.839951 4810 scope.go:117] "RemoveContainer" containerID="f13c00b75444d82ae151313db252a559d67eb3a9e93fc91fd59fa886fe8ada73" Feb 19 15:38:42 crc kubenswrapper[4810]: I0219 15:38:42.439103 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:38:42 crc kubenswrapper[4810]: E0219 15:38:42.439745 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:38:47 crc kubenswrapper[4810]: I0219 15:38:47.063169 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-sd4lr"] Feb 19 15:38:47 crc kubenswrapper[4810]: I0219 15:38:47.073398 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-sd4lr"] Feb 19 15:38:47 crc kubenswrapper[4810]: I0219 15:38:47.456919 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63eeb47c-9c4a-4e36-be24-61c126517600" path="/var/lib/kubelet/pods/63eeb47c-9c4a-4e36-be24-61c126517600/volumes" Feb 19 15:38:48 crc kubenswrapper[4810]: I0219 15:38:48.025979 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-k9zsz"] Feb 19 15:38:48 crc kubenswrapper[4810]: I0219 15:38:48.034632 4810 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-k9zsz"] Feb 19 15:38:49 crc kubenswrapper[4810]: I0219 15:38:49.451668 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082fc735-2850-452d-841a-0af9ed7ed171" path="/var/lib/kubelet/pods/082fc735-2850-452d-841a-0af9ed7ed171/volumes" Feb 19 15:38:56 crc kubenswrapper[4810]: I0219 15:38:56.439521 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:38:56 crc kubenswrapper[4810]: E0219 15:38:56.440599 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:39:06 crc kubenswrapper[4810]: I0219 15:39:06.264286 4810 generic.go:334] "Generic (PLEG): container finished" podID="e6255c5c-26d4-421f-9156-1bdd2f5adcc6" containerID="46b7253182b6dc0e64c2e59a4d275ed006838d41d039fd87a4c506ea7296776b" exitCode=0 Feb 19 15:39:06 crc kubenswrapper[4810]: I0219 15:39:06.264399 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" event={"ID":"e6255c5c-26d4-421f-9156-1bdd2f5adcc6","Type":"ContainerDied","Data":"46b7253182b6dc0e64c2e59a4d275ed006838d41d039fd87a4c506ea7296776b"} Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.787248 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.892734 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam\") pod \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.892832 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory\") pod \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.893033 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzg4x\" (UniqueName: \"kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x\") pod \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\" (UID: \"e6255c5c-26d4-421f-9156-1bdd2f5adcc6\") " Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.904593 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x" (OuterVolumeSpecName: "kube-api-access-qzg4x") pod "e6255c5c-26d4-421f-9156-1bdd2f5adcc6" (UID: "e6255c5c-26d4-421f-9156-1bdd2f5adcc6"). InnerVolumeSpecName "kube-api-access-qzg4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.921560 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory" (OuterVolumeSpecName: "inventory") pod "e6255c5c-26d4-421f-9156-1bdd2f5adcc6" (UID: "e6255c5c-26d4-421f-9156-1bdd2f5adcc6"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.923918 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6255c5c-26d4-421f-9156-1bdd2f5adcc6" (UID: "e6255c5c-26d4-421f-9156-1bdd2f5adcc6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.998262 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzg4x\" (UniqueName: \"kubernetes.io/projected/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-kube-api-access-qzg4x\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.998317 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:07 crc kubenswrapper[4810]: I0219 15:39:07.998418 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6255c5c-26d4-421f-9156-1bdd2f5adcc6-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.288179 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" event={"ID":"e6255c5c-26d4-421f-9156-1bdd2f5adcc6","Type":"ContainerDied","Data":"df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b"} Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.288237 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2d8f9f79616c4ab1d340863a47dd0003a7f33fda9bcbbe938ee53e102fa03b" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 
15:39:08.288663 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-flmfl" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.400397 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498"] Feb 19 15:39:08 crc kubenswrapper[4810]: E0219 15:39:08.401223 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6255c5c-26d4-421f-9156-1bdd2f5adcc6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.401251 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6255c5c-26d4-421f-9156-1bdd2f5adcc6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.401592 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6255c5c-26d4-421f-9156-1bdd2f5adcc6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.402524 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.430082 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.430403 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.430469 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.430614 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.448071 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498"] Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.510784 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49znl\" (UniqueName: \"kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.510988 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: 
I0219 15:39:08.511024 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.613359 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49znl\" (UniqueName: \"kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.613456 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.613492 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.619478 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.629928 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.632180 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49znl\" (UniqueName: \"kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6q498\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:08 crc kubenswrapper[4810]: I0219 15:39:08.743839 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:39:09 crc kubenswrapper[4810]: I0219 15:39:09.387609 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498"] Feb 19 15:39:10 crc kubenswrapper[4810]: I0219 15:39:10.308022 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" event={"ID":"2cff3a3e-0543-4fec-8f5b-5421be276386","Type":"ContainerStarted","Data":"b4e86a3f28595dacb736dcb71007d80706dd96818e80cc25e2a888dfcab09e96"} Feb 19 15:39:10 crc kubenswrapper[4810]: I0219 15:39:10.308651 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" event={"ID":"2cff3a3e-0543-4fec-8f5b-5421be276386","Type":"ContainerStarted","Data":"0dba2a07ad454eaa31c1496b450e78caabf96a5e17db799ef132576e5619dad4"} Feb 19 15:39:10 crc kubenswrapper[4810]: I0219 15:39:10.335900 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" podStartSLOduration=1.880465222 podStartE2EDuration="2.335885428s" podCreationTimestamp="2026-02-19 15:39:08 +0000 UTC" firstStartedPulling="2026-02-19 15:39:09.404369269 +0000 UTC m=+1778.886399413" lastFinishedPulling="2026-02-19 15:39:09.859789505 +0000 UTC m=+1779.341819619" observedRunningTime="2026-02-19 15:39:10.32917508 +0000 UTC m=+1779.811205204" watchObservedRunningTime="2026-02-19 15:39:10.335885428 +0000 UTC m=+1779.817915552" Feb 19 15:39:11 crc kubenswrapper[4810]: I0219 15:39:11.453714 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:39:11 crc kubenswrapper[4810]: E0219 15:39:11.454167 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:39:25 crc kubenswrapper[4810]: I0219 15:39:25.441517 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:39:25 crc kubenswrapper[4810]: E0219 15:39:25.444599 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.060475 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7jdcp"] Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.070776 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-thnc7"] Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.080918 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-j989d"] Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.089884 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7jdcp"] Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.098918 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-thnc7"] Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.110890 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-j989d"] Feb 19 15:39:33 crc 
kubenswrapper[4810]: I0219 15:39:33.458951 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fe6fdb-2970-4773-8184-a2d16b8ca89a" path="/var/lib/kubelet/pods/36fe6fdb-2970-4773-8184-a2d16b8ca89a/volumes" Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.459660 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd5dede-cf58-43c7-954e-b9b1d33ad8d1" path="/var/lib/kubelet/pods/4dd5dede-cf58-43c7-954e-b9b1d33ad8d1/volumes" Feb 19 15:39:33 crc kubenswrapper[4810]: I0219 15:39:33.460298 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92797675-ddf7-43cf-90af-0248cf097509" path="/var/lib/kubelet/pods/92797675-ddf7-43cf-90af-0248cf097509/volumes" Feb 19 15:39:37 crc kubenswrapper[4810]: I0219 15:39:37.439659 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:39:37 crc kubenswrapper[4810]: E0219 15:39:37.440710 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:39:41 crc kubenswrapper[4810]: I0219 15:39:41.247447 4810 scope.go:117] "RemoveContainer" containerID="22a00ed65eebcb7030f20de212b927e0556118314908589176cea5b5329504cb" Feb 19 15:39:41 crc kubenswrapper[4810]: I0219 15:39:41.289611 4810 scope.go:117] "RemoveContainer" containerID="aa561f23770b052d6b320e47499c0a8789e25a7a2367b69634f88f903c8d780a" Feb 19 15:39:41 crc kubenswrapper[4810]: I0219 15:39:41.357055 4810 scope.go:117] "RemoveContainer" containerID="da258067ce2c7912909dc7c937b6ad45df02bf5c8504937ad6d6f0ea0359724a" Feb 19 15:39:41 crc kubenswrapper[4810]: I0219 
15:39:41.403132 4810 scope.go:117] "RemoveContainer" containerID="7e83b0c5177b1183e58ad0498417fc1c3b6e142723e7482bda0235e4615b43f5" Feb 19 15:39:41 crc kubenswrapper[4810]: I0219 15:39:41.467364 4810 scope.go:117] "RemoveContainer" containerID="829a51aca23df8d8763078bcae4b4cba43b6c265996ab11fc55f6d42ce950516" Feb 19 15:39:46 crc kubenswrapper[4810]: I0219 15:39:46.037237 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hmc6k"] Feb 19 15:39:46 crc kubenswrapper[4810]: I0219 15:39:46.050494 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hmc6k"] Feb 19 15:39:47 crc kubenswrapper[4810]: I0219 15:39:47.476742 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2024a783-c3f9-4e57-b00f-52bec164e64e" path="/var/lib/kubelet/pods/2024a783-c3f9-4e57-b00f-52bec164e64e/volumes" Feb 19 15:39:49 crc kubenswrapper[4810]: I0219 15:39:49.046831 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-svmgl"] Feb 19 15:39:49 crc kubenswrapper[4810]: I0219 15:39:49.055914 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-svmgl"] Feb 19 15:39:49 crc kubenswrapper[4810]: I0219 15:39:49.438900 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:39:49 crc kubenswrapper[4810]: E0219 15:39:49.439140 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:39:49 crc kubenswrapper[4810]: I0219 15:39:49.448633 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="848dfe9d-05f4-4ba9-919e-23e9a7ae63d5" path="/var/lib/kubelet/pods/848dfe9d-05f4-4ba9-919e-23e9a7ae63d5/volumes" Feb 19 15:40:02 crc kubenswrapper[4810]: I0219 15:40:02.439481 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:40:02 crc kubenswrapper[4810]: E0219 15:40:02.440337 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:40:14 crc kubenswrapper[4810]: I0219 15:40:14.440123 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:40:14 crc kubenswrapper[4810]: E0219 15:40:14.440815 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:40:25 crc kubenswrapper[4810]: I0219 15:40:25.213590 4810 generic.go:334] "Generic (PLEG): container finished" podID="2cff3a3e-0543-4fec-8f5b-5421be276386" containerID="b4e86a3f28595dacb736dcb71007d80706dd96818e80cc25e2a888dfcab09e96" exitCode=0 Feb 19 15:40:25 crc kubenswrapper[4810]: I0219 15:40:25.213677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" 
event={"ID":"2cff3a3e-0543-4fec-8f5b-5421be276386","Type":"ContainerDied","Data":"b4e86a3f28595dacb736dcb71007d80706dd96818e80cc25e2a888dfcab09e96"} Feb 19 15:40:25 crc kubenswrapper[4810]: I0219 15:40:25.441207 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:40:25 crc kubenswrapper[4810]: E0219 15:40:25.442041 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.686591 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.811541 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory\") pod \"2cff3a3e-0543-4fec-8f5b-5421be276386\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.811621 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49znl\" (UniqueName: \"kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl\") pod \"2cff3a3e-0543-4fec-8f5b-5421be276386\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.811826 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam\") pod \"2cff3a3e-0543-4fec-8f5b-5421be276386\" (UID: \"2cff3a3e-0543-4fec-8f5b-5421be276386\") " Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.817706 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl" (OuterVolumeSpecName: "kube-api-access-49znl") pod "2cff3a3e-0543-4fec-8f5b-5421be276386" (UID: "2cff3a3e-0543-4fec-8f5b-5421be276386"). InnerVolumeSpecName "kube-api-access-49znl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.841481 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory" (OuterVolumeSpecName: "inventory") pod "2cff3a3e-0543-4fec-8f5b-5421be276386" (UID: "2cff3a3e-0543-4fec-8f5b-5421be276386"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.848081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cff3a3e-0543-4fec-8f5b-5421be276386" (UID: "2cff3a3e-0543-4fec-8f5b-5421be276386"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.914217 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.914274 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49znl\" (UniqueName: \"kubernetes.io/projected/2cff3a3e-0543-4fec-8f5b-5421be276386-kube-api-access-49znl\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:26 crc kubenswrapper[4810]: I0219 15:40:26.914295 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cff3a3e-0543-4fec-8f5b-5421be276386-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.239491 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" event={"ID":"2cff3a3e-0543-4fec-8f5b-5421be276386","Type":"ContainerDied","Data":"0dba2a07ad454eaa31c1496b450e78caabf96a5e17db799ef132576e5619dad4"} Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.239551 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dba2a07ad454eaa31c1496b450e78caabf96a5e17db799ef132576e5619dad4" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.239561 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6q498" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.340058 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7"] Feb 19 15:40:27 crc kubenswrapper[4810]: E0219 15:40:27.340729 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cff3a3e-0543-4fec-8f5b-5421be276386" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.340758 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cff3a3e-0543-4fec-8f5b-5421be276386" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.341039 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cff3a3e-0543-4fec-8f5b-5421be276386" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.341999 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.343833 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.343924 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.345129 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.345363 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.353737 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7"] Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.424825 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.424893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57fs\" (UniqueName: \"kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 
15:40:27.425061 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.526554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.526660 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.526697 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s57fs\" (UniqueName: \"kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.531266 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.533386 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.548882 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s57fs\" (UniqueName: \"kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:27 crc kubenswrapper[4810]: I0219 15:40:27.665062 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:28 crc kubenswrapper[4810]: I0219 15:40:28.286134 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7"] Feb 19 15:40:29 crc kubenswrapper[4810]: I0219 15:40:29.266946 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" event={"ID":"412dc62a-d25e-4820-947b-582e310ddff1","Type":"ContainerStarted","Data":"4a7d466ee6eb7612f01efe84e9b3560e7b39b5e659599c6f4786a572b3f2f9b0"} Feb 19 15:40:29 crc kubenswrapper[4810]: I0219 15:40:29.267315 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" event={"ID":"412dc62a-d25e-4820-947b-582e310ddff1","Type":"ContainerStarted","Data":"4a1d9d8f611ac0fb3da1c822934589aaede54eb9cff7b74b0f86b60eca68033e"} Feb 19 15:40:29 crc kubenswrapper[4810]: I0219 15:40:29.295932 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" podStartSLOduration=1.828377057 podStartE2EDuration="2.295903356s" podCreationTimestamp="2026-02-19 15:40:27 +0000 UTC" firstStartedPulling="2026-02-19 15:40:28.292012124 +0000 UTC m=+1857.774042288" lastFinishedPulling="2026-02-19 15:40:28.759538463 +0000 UTC m=+1858.241568587" observedRunningTime="2026-02-19 15:40:29.291109076 +0000 UTC m=+1858.773139200" watchObservedRunningTime="2026-02-19 15:40:29.295903356 +0000 UTC m=+1858.777933510" Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.064115 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-w2s7h"] Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.085774 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nxd5j"] Feb 19 15:40:34 crc 
kubenswrapper[4810]: I0219 15:40:34.097883 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8ftxl"] Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.108910 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-w2s7h"] Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.119659 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8ftxl"] Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.128401 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nxd5j"] Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.326494 4810 generic.go:334] "Generic (PLEG): container finished" podID="412dc62a-d25e-4820-947b-582e310ddff1" containerID="4a7d466ee6eb7612f01efe84e9b3560e7b39b5e659599c6f4786a572b3f2f9b0" exitCode=0 Feb 19 15:40:34 crc kubenswrapper[4810]: I0219 15:40:34.326537 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" event={"ID":"412dc62a-d25e-4820-947b-582e310ddff1","Type":"ContainerDied","Data":"4a7d466ee6eb7612f01efe84e9b3560e7b39b5e659599c6f4786a572b3f2f9b0"} Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.112086 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-841f-account-create-update-swd7q"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.128857 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-da99-account-create-update-4j7hb"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.137142 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6e67-account-create-update-lk6cv"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.147369 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-841f-account-create-update-swd7q"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.156645 
4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6e67-account-create-update-lk6cv"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.164047 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-da99-account-create-update-4j7hb"] Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.470065 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f19eb06-d11c-409b-8b7e-516c9a5db815" path="/var/lib/kubelet/pods/1f19eb06-d11c-409b-8b7e-516c9a5db815/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.470975 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e0d5d9-1d58-41a5-b740-8c8286edec31" path="/var/lib/kubelet/pods/48e0d5d9-1d58-41a5-b740-8c8286edec31/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.471486 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987d17ad-1427-4709-b5db-19fbb00e8a7c" path="/var/lib/kubelet/pods/987d17ad-1427-4709-b5db-19fbb00e8a7c/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.472098 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1703853-2754-4348-8c45-dcd98ff5d429" path="/var/lib/kubelet/pods/d1703853-2754-4348-8c45-dcd98ff5d429/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.473664 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5e0166-d811-4dcd-9230-976dd1893c11" path="/var/lib/kubelet/pods/da5e0166-d811-4dcd-9230-976dd1893c11/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.474232 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04e1699-2be0-4dca-8e4a-73035fde359f" path="/var/lib/kubelet/pods/f04e1699-2be0-4dca-8e4a-73035fde359f/volumes" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.792809 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.815071 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s57fs\" (UniqueName: \"kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs\") pod \"412dc62a-d25e-4820-947b-582e310ddff1\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.815544 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory\") pod \"412dc62a-d25e-4820-947b-582e310ddff1\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.815847 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam\") pod \"412dc62a-d25e-4820-947b-582e310ddff1\" (UID: \"412dc62a-d25e-4820-947b-582e310ddff1\") " Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.821646 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs" (OuterVolumeSpecName: "kube-api-access-s57fs") pod "412dc62a-d25e-4820-947b-582e310ddff1" (UID: "412dc62a-d25e-4820-947b-582e310ddff1"). InnerVolumeSpecName "kube-api-access-s57fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.850998 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory" (OuterVolumeSpecName: "inventory") pod "412dc62a-d25e-4820-947b-582e310ddff1" (UID: "412dc62a-d25e-4820-947b-582e310ddff1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.865476 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "412dc62a-d25e-4820-947b-582e310ddff1" (UID: "412dc62a-d25e-4820-947b-582e310ddff1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.917924 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.917973 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s57fs\" (UniqueName: \"kubernetes.io/projected/412dc62a-d25e-4820-947b-582e310ddff1-kube-api-access-s57fs\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:35 crc kubenswrapper[4810]: I0219 15:40:35.917987 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/412dc62a-d25e-4820-947b-582e310ddff1-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.347159 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" event={"ID":"412dc62a-d25e-4820-947b-582e310ddff1","Type":"ContainerDied","Data":"4a1d9d8f611ac0fb3da1c822934589aaede54eb9cff7b74b0f86b60eca68033e"} Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.347217 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1d9d8f611ac0fb3da1c822934589aaede54eb9cff7b74b0f86b60eca68033e" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 
15:40:36.348908 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.455724 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5"] Feb 19 15:40:36 crc kubenswrapper[4810]: E0219 15:40:36.456522 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="412dc62a-d25e-4820-947b-582e310ddff1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.456549 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="412dc62a-d25e-4820-947b-582e310ddff1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.457051 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="412dc62a-d25e-4820-947b-582e310ddff1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.458503 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.463567 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.464145 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.468219 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.468443 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.468870 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5"] Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.538816 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.538899 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9scx\" (UniqueName: \"kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 
15:40:36.539100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.640761 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.640852 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.640900 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9scx\" (UniqueName: \"kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.644257 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.647596 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.656734 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9scx\" (UniqueName: \"kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s8kk5\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:36 crc kubenswrapper[4810]: I0219 15:40:36.832685 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:40:37 crc kubenswrapper[4810]: I0219 15:40:37.403217 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5"] Feb 19 15:40:38 crc kubenswrapper[4810]: I0219 15:40:38.370411 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" event={"ID":"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b","Type":"ContainerStarted","Data":"66bd09d023dfa70c25f2018bd9b82f03017c1b6846777cd688b44fdbcc73f0a7"} Feb 19 15:40:38 crc kubenswrapper[4810]: I0219 15:40:38.370736 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" event={"ID":"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b","Type":"ContainerStarted","Data":"f9bc96f54f3b78da1f1da2ab5fb44990eaf75c7f9b3ec0fa37f91c60d09df681"} Feb 19 15:40:40 crc kubenswrapper[4810]: I0219 15:40:40.440632 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:40:40 crc kubenswrapper[4810]: E0219 15:40:40.441140 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.629291 4810 scope.go:117] "RemoveContainer" containerID="2fa804cbc29144cbaa9d2e4c3f648166e91009da2ed6d113042e7022e9308b2c" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.652573 4810 scope.go:117] "RemoveContainer" containerID="61fee9f5cc97dc9164d9a8b37259645ec27704b544f8031e79cd8630294aa448" Feb 19 
15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.715133 4810 scope.go:117] "RemoveContainer" containerID="7cb43c21f053a8d03036f06cd4952d1e70925f82267f48d6f2c4959f93a370e5" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.735639 4810 scope.go:117] "RemoveContainer" containerID="99cf896833f13eecd3fedefc31f58e2b88d17d37a7cb7ae1aea233b7d9a39af1" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.780374 4810 scope.go:117] "RemoveContainer" containerID="ead759deef71357ae0d9ddba72b509ea84ac0664aab15baecfde700a4dc84f66" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.822301 4810 scope.go:117] "RemoveContainer" containerID="d738b05f8038fce0f6f7dca977b306ea2c9695f4bc8b38cb001bb799b15410d3" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.866852 4810 scope.go:117] "RemoveContainer" containerID="ad28bfaa41efd8e4e6c465f81c081a0451a00386412156a5267acdd97840a40b" Feb 19 15:40:41 crc kubenswrapper[4810]: I0219 15:40:41.890379 4810 scope.go:117] "RemoveContainer" containerID="a519fabbf15898bb4c345dee03c392f33a7ca3106e889528c9a61a815ff5b000" Feb 19 15:40:51 crc kubenswrapper[4810]: I0219 15:40:51.445462 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:40:52 crc kubenswrapper[4810]: I0219 15:40:52.523051 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364"} Feb 19 15:40:52 crc kubenswrapper[4810]: I0219 15:40:52.548602 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" podStartSLOduration=15.972690283 podStartE2EDuration="16.548568357s" podCreationTimestamp="2026-02-19 15:40:36 +0000 UTC" firstStartedPulling="2026-02-19 15:40:37.408203722 +0000 UTC m=+1866.890233846" 
lastFinishedPulling="2026-02-19 15:40:37.984081786 +0000 UTC m=+1867.466111920" observedRunningTime="2026-02-19 15:40:38.392965565 +0000 UTC m=+1867.874995689" watchObservedRunningTime="2026-02-19 15:40:52.548568357 +0000 UTC m=+1882.030598521" Feb 19 15:41:09 crc kubenswrapper[4810]: I0219 15:41:09.056918 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-58xq9"] Feb 19 15:41:09 crc kubenswrapper[4810]: I0219 15:41:09.067724 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-58xq9"] Feb 19 15:41:09 crc kubenswrapper[4810]: I0219 15:41:09.454138 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972d6f5e-3edf-4b6e-bdde-39c580caea31" path="/var/lib/kubelet/pods/972d6f5e-3edf-4b6e-bdde-39c580caea31/volumes" Feb 19 15:41:17 crc kubenswrapper[4810]: I0219 15:41:17.826233 4810 generic.go:334] "Generic (PLEG): container finished" podID="12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" containerID="66bd09d023dfa70c25f2018bd9b82f03017c1b6846777cd688b44fdbcc73f0a7" exitCode=0 Feb 19 15:41:17 crc kubenswrapper[4810]: I0219 15:41:17.826382 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" event={"ID":"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b","Type":"ContainerDied","Data":"66bd09d023dfa70c25f2018bd9b82f03017c1b6846777cd688b44fdbcc73f0a7"} Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.337125 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.442924 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam\") pod \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.443498 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9scx\" (UniqueName: \"kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx\") pod \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.443568 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory\") pod \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\" (UID: \"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b\") " Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.451635 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx" (OuterVolumeSpecName: "kube-api-access-m9scx") pod "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" (UID: "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b"). InnerVolumeSpecName "kube-api-access-m9scx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.471781 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory" (OuterVolumeSpecName: "inventory") pod "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" (UID: "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.475386 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" (UID: "12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.547172 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9scx\" (UniqueName: \"kubernetes.io/projected/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-kube-api-access-m9scx\") on node \"crc\" DevicePath \"\"" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.547199 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.547210 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.858698 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" event={"ID":"12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b","Type":"ContainerDied","Data":"f9bc96f54f3b78da1f1da2ab5fb44990eaf75c7f9b3ec0fa37f91c60d09df681"} Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.858791 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bc96f54f3b78da1f1da2ab5fb44990eaf75c7f9b3ec0fa37f91c60d09df681" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 
15:41:19.858806 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s8kk5" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.987694 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d"] Feb 19 15:41:19 crc kubenswrapper[4810]: E0219 15:41:19.988114 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.988135 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.988389 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.989024 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.994661 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.994893 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.995083 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:41:19 crc kubenswrapper[4810]: I0219 15:41:19.998974 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.023533 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d"] Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.059213 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.059299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsg4\" (UniqueName: \"kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc 
kubenswrapper[4810]: I0219 15:41:20.060064 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.162474 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.162554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vsg4\" (UniqueName: \"kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.162757 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.168796 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.169025 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.180851 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vsg4\" (UniqueName: \"kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cm25d\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.320147 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:41:20 crc kubenswrapper[4810]: I0219 15:41:20.949458 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d"] Feb 19 15:41:21 crc kubenswrapper[4810]: I0219 15:41:21.883064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" event={"ID":"7e1f4472-242a-40a0-a574-9c3119fdb705","Type":"ContainerStarted","Data":"b27177a5822d1f5cc2c1b88795d4f16252ecccf22df3b242820cee1469f821d3"} Feb 19 15:41:21 crc kubenswrapper[4810]: I0219 15:41:21.883501 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" event={"ID":"7e1f4472-242a-40a0-a574-9c3119fdb705","Type":"ContainerStarted","Data":"b3ea6f11973dbfa2762088780fd16a66b97d9eb13ec3b90d8368472e7902ee01"} Feb 19 15:41:21 crc kubenswrapper[4810]: I0219 15:41:21.919176 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" podStartSLOduration=2.49172541 podStartE2EDuration="2.919150504s" podCreationTimestamp="2026-02-19 15:41:19 +0000 UTC" firstStartedPulling="2026-02-19 15:41:20.961808468 +0000 UTC m=+1910.443838622" lastFinishedPulling="2026-02-19 15:41:21.389233592 +0000 UTC m=+1910.871263716" observedRunningTime="2026-02-19 15:41:21.912832906 +0000 UTC m=+1911.394863060" watchObservedRunningTime="2026-02-19 15:41:21.919150504 +0000 UTC m=+1911.401180668" Feb 19 15:41:33 crc kubenswrapper[4810]: I0219 15:41:33.092889 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7bkw"] Feb 19 15:41:33 crc kubenswrapper[4810]: I0219 15:41:33.106362 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7bkw"] Feb 19 15:41:33 crc kubenswrapper[4810]: I0219 
15:41:33.451969 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="299a53ac-e7e5-47a3-bf65-df5624b77717" path="/var/lib/kubelet/pods/299a53ac-e7e5-47a3-bf65-df5624b77717/volumes" Feb 19 15:41:36 crc kubenswrapper[4810]: I0219 15:41:36.032071 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hk2fs"] Feb 19 15:41:36 crc kubenswrapper[4810]: I0219 15:41:36.040524 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hk2fs"] Feb 19 15:41:37 crc kubenswrapper[4810]: I0219 15:41:37.458222 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1a5ee7-3792-4f35-a967-80fb96c7df10" path="/var/lib/kubelet/pods/5f1a5ee7-3792-4f35-a967-80fb96c7df10/volumes" Feb 19 15:41:42 crc kubenswrapper[4810]: I0219 15:41:42.043724 4810 scope.go:117] "RemoveContainer" containerID="9c8648b58dedd6b14f6832bd1d2f895ecfd4e781a2433a653d4f48b76efb9fef" Feb 19 15:41:42 crc kubenswrapper[4810]: I0219 15:41:42.121382 4810 scope.go:117] "RemoveContainer" containerID="45859d708bbdd95af868748506ae358c82e96df75fa08cfe41661e0323e54c01" Feb 19 15:41:42 crc kubenswrapper[4810]: I0219 15:41:42.156603 4810 scope.go:117] "RemoveContainer" containerID="19b609d4be47506e6c511dded32be9ffbc5fec785d73d8309ff072ff0f1cf61d" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.304563 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.308117 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.324547 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.408984 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.409202 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfqz\" (UniqueName: \"kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.409702 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.511686 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfqz\" (UniqueName: \"kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.511822 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.511877 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.512513 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.512489 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.544295 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfqz\" (UniqueName: \"kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz\") pod \"redhat-marketplace-kq8xx\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:09 crc kubenswrapper[4810]: I0219 15:42:09.684314 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:10 crc kubenswrapper[4810]: I0219 15:42:10.182928 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:10 crc kubenswrapper[4810]: I0219 15:42:10.874463 4810 generic.go:334] "Generic (PLEG): container finished" podID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerID="7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132" exitCode=0 Feb 19 15:42:10 crc kubenswrapper[4810]: I0219 15:42:10.874518 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerDied","Data":"7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132"} Feb 19 15:42:10 crc kubenswrapper[4810]: I0219 15:42:10.875444 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerStarted","Data":"200a3d1607f6afdf8f0e98c2f8769884320bf708b1b5a1c5baf2465219d5ac9f"} Feb 19 15:42:11 crc kubenswrapper[4810]: I0219 15:42:11.892731 4810 generic.go:334] "Generic (PLEG): container finished" podID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerID="544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8" exitCode=0 Feb 19 15:42:11 crc kubenswrapper[4810]: I0219 15:42:11.892867 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerDied","Data":"544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8"} Feb 19 15:42:12 crc kubenswrapper[4810]: I0219 15:42:12.904544 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" 
event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerStarted","Data":"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b"} Feb 19 15:42:12 crc kubenswrapper[4810]: I0219 15:42:12.929454 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kq8xx" podStartSLOduration=2.506465622 podStartE2EDuration="3.929421869s" podCreationTimestamp="2026-02-19 15:42:09 +0000 UTC" firstStartedPulling="2026-02-19 15:42:10.877043145 +0000 UTC m=+1960.359073279" lastFinishedPulling="2026-02-19 15:42:12.299999372 +0000 UTC m=+1961.782029526" observedRunningTime="2026-02-19 15:42:12.922074147 +0000 UTC m=+1962.404104291" watchObservedRunningTime="2026-02-19 15:42:12.929421869 +0000 UTC m=+1962.411451993" Feb 19 15:42:17 crc kubenswrapper[4810]: I0219 15:42:17.057578 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf7qg"] Feb 19 15:42:17 crc kubenswrapper[4810]: I0219 15:42:17.072399 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hf7qg"] Feb 19 15:42:17 crc kubenswrapper[4810]: I0219 15:42:17.449058 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f834f671-3add-4bfc-8152-596d66e90f22" path="/var/lib/kubelet/pods/f834f671-3add-4bfc-8152-596d66e90f22/volumes" Feb 19 15:42:18 crc kubenswrapper[4810]: I0219 15:42:18.970349 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e1f4472-242a-40a0-a574-9c3119fdb705" containerID="b27177a5822d1f5cc2c1b88795d4f16252ecccf22df3b242820cee1469f821d3" exitCode=0 Feb 19 15:42:18 crc kubenswrapper[4810]: I0219 15:42:18.970457 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" event={"ID":"7e1f4472-242a-40a0-a574-9c3119fdb705","Type":"ContainerDied","Data":"b27177a5822d1f5cc2c1b88795d4f16252ecccf22df3b242820cee1469f821d3"} Feb 19 15:42:19 crc 
kubenswrapper[4810]: I0219 15:42:19.684535 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:19 crc kubenswrapper[4810]: I0219 15:42:19.684618 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:19 crc kubenswrapper[4810]: I0219 15:42:19.756890 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.064241 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.114020 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.453439 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.606151 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory\") pod \"7e1f4472-242a-40a0-a574-9c3119fdb705\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.606591 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam\") pod \"7e1f4472-242a-40a0-a574-9c3119fdb705\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.606732 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vsg4\" (UniqueName: \"kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4\") pod \"7e1f4472-242a-40a0-a574-9c3119fdb705\" (UID: \"7e1f4472-242a-40a0-a574-9c3119fdb705\") " Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.615475 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4" (OuterVolumeSpecName: "kube-api-access-5vsg4") pod "7e1f4472-242a-40a0-a574-9c3119fdb705" (UID: "7e1f4472-242a-40a0-a574-9c3119fdb705"). InnerVolumeSpecName "kube-api-access-5vsg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.634397 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory" (OuterVolumeSpecName: "inventory") pod "7e1f4472-242a-40a0-a574-9c3119fdb705" (UID: "7e1f4472-242a-40a0-a574-9c3119fdb705"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.636187 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e1f4472-242a-40a0-a574-9c3119fdb705" (UID: "7e1f4472-242a-40a0-a574-9c3119fdb705"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.709718 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vsg4\" (UniqueName: \"kubernetes.io/projected/7e1f4472-242a-40a0-a574-9c3119fdb705-kube-api-access-5vsg4\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.709786 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:20 crc kubenswrapper[4810]: I0219 15:42:20.709804 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e1f4472-242a-40a0-a574-9c3119fdb705-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.000522 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.000511 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cm25d" event={"ID":"7e1f4472-242a-40a0-a574-9c3119fdb705","Type":"ContainerDied","Data":"b3ea6f11973dbfa2762088780fd16a66b97d9eb13ec3b90d8368472e7902ee01"} Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.000613 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3ea6f11973dbfa2762088780fd16a66b97d9eb13ec3b90d8368472e7902ee01" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.102403 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gw579"] Feb 19 15:42:21 crc kubenswrapper[4810]: E0219 15:42:21.102910 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e1f4472-242a-40a0-a574-9c3119fdb705" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.102931 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1f4472-242a-40a0-a574-9c3119fdb705" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.103203 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e1f4472-242a-40a0-a574-9c3119fdb705" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.104069 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.110064 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.110279 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.110307 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.111130 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.126035 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gw579"] Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.223441 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6mc\" (UniqueName: \"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.223973 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.224146 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.326791 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6mc\" (UniqueName: \"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.326940 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.327128 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.330749 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: 
I0219 15:42:21.340151 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.342452 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6mc\" (UniqueName: \"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc\") pod \"ssh-known-hosts-edpm-deployment-gw579\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:21 crc kubenswrapper[4810]: I0219 15:42:21.437065 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.003538 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gw579"] Feb 19 15:42:22 crc kubenswrapper[4810]: W0219 15:42:22.008663 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3132ed5_687d_4cd1_a539_35c4766a27c1.slice/crio-7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9 WatchSource:0}: Error finding container 7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9: Status 404 returned error can't find the container with id 7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9 Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.010451 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kq8xx" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="registry-server" 
containerID="cri-o://c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b" gracePeriod=2 Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.966409 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.977717 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities\") pod \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.977971 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpfqz\" (UniqueName: \"kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz\") pod \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.978094 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content\") pod \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\" (UID: \"8baaff2f-6fa7-4d65-b161-fc60f06aab23\") " Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.978548 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities" (OuterVolumeSpecName: "utilities") pod "8baaff2f-6fa7-4d65-b161-fc60f06aab23" (UID: "8baaff2f-6fa7-4d65-b161-fc60f06aab23"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.978970 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:22 crc kubenswrapper[4810]: I0219 15:42:22.986763 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz" (OuterVolumeSpecName: "kube-api-access-qpfqz") pod "8baaff2f-6fa7-4d65-b161-fc60f06aab23" (UID: "8baaff2f-6fa7-4d65-b161-fc60f06aab23"). InnerVolumeSpecName "kube-api-access-qpfqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.020755 4810 generic.go:334] "Generic (PLEG): container finished" podID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerID="c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b" exitCode=0 Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.020836 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerDied","Data":"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b"} Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.020855 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq8xx" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.020864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq8xx" event={"ID":"8baaff2f-6fa7-4d65-b161-fc60f06aab23","Type":"ContainerDied","Data":"200a3d1607f6afdf8f0e98c2f8769884320bf708b1b5a1c5baf2465219d5ac9f"} Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.020877 4810 scope.go:117] "RemoveContainer" containerID="c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.025214 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" event={"ID":"e3132ed5-687d-4cd1-a539-35c4766a27c1","Type":"ContainerStarted","Data":"14c148a9427173b4d7525b859ee79d7ec5a4d987fa7589d147eead08832d7e45"} Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.025255 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" event={"ID":"e3132ed5-687d-4cd1-a539-35c4766a27c1","Type":"ContainerStarted","Data":"7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9"} Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.038882 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8baaff2f-6fa7-4d65-b161-fc60f06aab23" (UID: "8baaff2f-6fa7-4d65-b161-fc60f06aab23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.051441 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" podStartSLOduration=1.524778661 podStartE2EDuration="2.051415306s" podCreationTimestamp="2026-02-19 15:42:21 +0000 UTC" firstStartedPulling="2026-02-19 15:42:22.01522478 +0000 UTC m=+1971.497254924" lastFinishedPulling="2026-02-19 15:42:22.541861435 +0000 UTC m=+1972.023891569" observedRunningTime="2026-02-19 15:42:23.046430882 +0000 UTC m=+1972.528461046" watchObservedRunningTime="2026-02-19 15:42:23.051415306 +0000 UTC m=+1972.533445530" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.068430 4810 scope.go:117] "RemoveContainer" containerID="544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.080314 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpfqz\" (UniqueName: \"kubernetes.io/projected/8baaff2f-6fa7-4d65-b161-fc60f06aab23-kube-api-access-qpfqz\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.080486 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8baaff2f-6fa7-4d65-b161-fc60f06aab23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.087500 4810 scope.go:117] "RemoveContainer" containerID="7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.106612 4810 scope.go:117] "RemoveContainer" containerID="c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b" Feb 19 15:42:23 crc kubenswrapper[4810]: E0219 15:42:23.107001 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b\": container with ID starting with c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b not found: ID does not exist" containerID="c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.107040 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b"} err="failed to get container status \"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b\": rpc error: code = NotFound desc = could not find container \"c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b\": container with ID starting with c1460f24dfbc8a18564e64c9e3c106dcd6dc43419b56133dfc09067da642a36b not found: ID does not exist" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.107064 4810 scope.go:117] "RemoveContainer" containerID="544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8" Feb 19 15:42:23 crc kubenswrapper[4810]: E0219 15:42:23.107397 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8\": container with ID starting with 544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8 not found: ID does not exist" containerID="544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.107443 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8"} err="failed to get container status \"544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8\": rpc error: code = NotFound desc = could not find container \"544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8\": container with ID 
starting with 544d12af023b04fc865881934021ea362dce4ac24f0b5797bf6ae10b51163ac8 not found: ID does not exist" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.107480 4810 scope.go:117] "RemoveContainer" containerID="7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132" Feb 19 15:42:23 crc kubenswrapper[4810]: E0219 15:42:23.107735 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132\": container with ID starting with 7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132 not found: ID does not exist" containerID="7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.107760 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132"} err="failed to get container status \"7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132\": rpc error: code = NotFound desc = could not find container \"7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132\": container with ID starting with 7119d702b43fa67da67f60fa699dc67bd8b6502948dc9469d587dcd095715132 not found: ID does not exist" Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.384695 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.397757 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq8xx"] Feb 19 15:42:23 crc kubenswrapper[4810]: I0219 15:42:23.462866 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" path="/var/lib/kubelet/pods/8baaff2f-6fa7-4d65-b161-fc60f06aab23/volumes" Feb 19 15:42:30 crc kubenswrapper[4810]: I0219 
15:42:30.109545 4810 generic.go:334] "Generic (PLEG): container finished" podID="e3132ed5-687d-4cd1-a539-35c4766a27c1" containerID="14c148a9427173b4d7525b859ee79d7ec5a4d987fa7589d147eead08832d7e45" exitCode=0 Feb 19 15:42:30 crc kubenswrapper[4810]: I0219 15:42:30.109681 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" event={"ID":"e3132ed5-687d-4cd1-a539-35c4766a27c1","Type":"ContainerDied","Data":"14c148a9427173b4d7525b859ee79d7ec5a4d987fa7589d147eead08832d7e45"} Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.669634 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.774485 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0\") pod \"e3132ed5-687d-4cd1-a539-35c4766a27c1\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.775108 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln6mc\" (UniqueName: \"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc\") pod \"e3132ed5-687d-4cd1-a539-35c4766a27c1\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.775244 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam\") pod \"e3132ed5-687d-4cd1-a539-35c4766a27c1\" (UID: \"e3132ed5-687d-4cd1-a539-35c4766a27c1\") " Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.785320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc" (OuterVolumeSpecName: "kube-api-access-ln6mc") pod "e3132ed5-687d-4cd1-a539-35c4766a27c1" (UID: "e3132ed5-687d-4cd1-a539-35c4766a27c1"). InnerVolumeSpecName "kube-api-access-ln6mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.815433 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3132ed5-687d-4cd1-a539-35c4766a27c1" (UID: "e3132ed5-687d-4cd1-a539-35c4766a27c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.836607 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e3132ed5-687d-4cd1-a539-35c4766a27c1" (UID: "e3132ed5-687d-4cd1-a539-35c4766a27c1"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.877386 4810 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.877435 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln6mc\" (UniqueName: \"kubernetes.io/projected/e3132ed5-687d-4cd1-a539-35c4766a27c1-kube-api-access-ln6mc\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:31 crc kubenswrapper[4810]: I0219 15:42:31.877459 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3132ed5-687d-4cd1-a539-35c4766a27c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.133043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" event={"ID":"e3132ed5-687d-4cd1-a539-35c4766a27c1","Type":"ContainerDied","Data":"7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9"} Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.133092 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3818ebf7dc58fe335ccb3662cba75b973de23e57f82872a69e287e71bf42d9" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.133092 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gw579" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.242744 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v"] Feb 19 15:42:32 crc kubenswrapper[4810]: E0219 15:42:32.243501 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="registry-server" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.243544 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="registry-server" Feb 19 15:42:32 crc kubenswrapper[4810]: E0219 15:42:32.243592 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="extract-content" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.243612 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="extract-content" Feb 19 15:42:32 crc kubenswrapper[4810]: E0219 15:42:32.243664 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3132ed5-687d-4cd1-a539-35c4766a27c1" containerName="ssh-known-hosts-edpm-deployment" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.243684 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3132ed5-687d-4cd1-a539-35c4766a27c1" containerName="ssh-known-hosts-edpm-deployment" Feb 19 15:42:32 crc kubenswrapper[4810]: E0219 15:42:32.243727 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="extract-utilities" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.243744 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="extract-utilities" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.244214 4810 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8baaff2f-6fa7-4d65-b161-fc60f06aab23" containerName="registry-server" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.244273 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3132ed5-687d-4cd1-a539-35c4766a27c1" containerName="ssh-known-hosts-edpm-deployment" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.245698 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.250509 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.250932 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.251114 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.251266 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.256372 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v"] Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.287034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.287137 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsz8\" (UniqueName: \"kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.287200 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.389602 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.390052 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.390246 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsz8\" (UniqueName: \"kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" 
(UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.397013 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.397601 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.408004 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwsz8\" (UniqueName: \"kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rf56v\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.572416 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:32 crc kubenswrapper[4810]: W0219 15:42:32.969151 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode77512a1_b460_4008_9e59_5b38f3e9f925.slice/crio-e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb WatchSource:0}: Error finding container e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb: Status 404 returned error can't find the container with id e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.969384 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v"] Feb 19 15:42:32 crc kubenswrapper[4810]: I0219 15:42:32.972674 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:42:33 crc kubenswrapper[4810]: I0219 15:42:33.142375 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" event={"ID":"e77512a1-b460-4008-9e59-5b38f3e9f925","Type":"ContainerStarted","Data":"e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb"} Feb 19 15:42:34 crc kubenswrapper[4810]: I0219 15:42:34.160155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" event={"ID":"e77512a1-b460-4008-9e59-5b38f3e9f925","Type":"ContainerStarted","Data":"4ba117d816027c740774a62d5ef2e96fec877e6b49c00a3b6d56dcbd8475ed9f"} Feb 19 15:42:34 crc kubenswrapper[4810]: I0219 15:42:34.196601 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" podStartSLOduration=1.804856859 podStartE2EDuration="2.196577804s" podCreationTimestamp="2026-02-19 15:42:32 +0000 UTC" 
firstStartedPulling="2026-02-19 15:42:32.972237957 +0000 UTC m=+1982.454268081" lastFinishedPulling="2026-02-19 15:42:33.363958902 +0000 UTC m=+1982.845989026" observedRunningTime="2026-02-19 15:42:34.187198691 +0000 UTC m=+1983.669228835" watchObservedRunningTime="2026-02-19 15:42:34.196577804 +0000 UTC m=+1983.678607928" Feb 19 15:42:42 crc kubenswrapper[4810]: I0219 15:42:42.245130 4810 generic.go:334] "Generic (PLEG): container finished" podID="e77512a1-b460-4008-9e59-5b38f3e9f925" containerID="4ba117d816027c740774a62d5ef2e96fec877e6b49c00a3b6d56dcbd8475ed9f" exitCode=0 Feb 19 15:42:42 crc kubenswrapper[4810]: I0219 15:42:42.245243 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" event={"ID":"e77512a1-b460-4008-9e59-5b38f3e9f925","Type":"ContainerDied","Data":"4ba117d816027c740774a62d5ef2e96fec877e6b49c00a3b6d56dcbd8475ed9f"} Feb 19 15:42:42 crc kubenswrapper[4810]: I0219 15:42:42.340685 4810 scope.go:117] "RemoveContainer" containerID="eb023c9cd6a803467c474cff2a48a3d6536859d9cdbc3785ab4eb9814aa6c925" Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.789387 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.936798 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory\") pod \"e77512a1-b460-4008-9e59-5b38f3e9f925\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.936959 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsz8\" (UniqueName: \"kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8\") pod \"e77512a1-b460-4008-9e59-5b38f3e9f925\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.937009 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam\") pod \"e77512a1-b460-4008-9e59-5b38f3e9f925\" (UID: \"e77512a1-b460-4008-9e59-5b38f3e9f925\") " Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.943740 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8" (OuterVolumeSpecName: "kube-api-access-cwsz8") pod "e77512a1-b460-4008-9e59-5b38f3e9f925" (UID: "e77512a1-b460-4008-9e59-5b38f3e9f925"). InnerVolumeSpecName "kube-api-access-cwsz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.965587 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e77512a1-b460-4008-9e59-5b38f3e9f925" (UID: "e77512a1-b460-4008-9e59-5b38f3e9f925"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:43 crc kubenswrapper[4810]: I0219 15:42:43.970776 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory" (OuterVolumeSpecName: "inventory") pod "e77512a1-b460-4008-9e59-5b38f3e9f925" (UID: "e77512a1-b460-4008-9e59-5b38f3e9f925"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.040558 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.040609 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsz8\" (UniqueName: \"kubernetes.io/projected/e77512a1-b460-4008-9e59-5b38f3e9f925-kube-api-access-cwsz8\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.040631 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e77512a1-b460-4008-9e59-5b38f3e9f925-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.264454 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" 
event={"ID":"e77512a1-b460-4008-9e59-5b38f3e9f925","Type":"ContainerDied","Data":"e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb"} Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.264489 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cbae0924331351ea94389baefdade5c34ca28c724f53efa2a6641fd49957fb" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.264500 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rf56v" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.411628 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669"] Feb 19 15:42:44 crc kubenswrapper[4810]: E0219 15:42:44.412409 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e77512a1-b460-4008-9e59-5b38f3e9f925" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.412472 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e77512a1-b460-4008-9e59-5b38f3e9f925" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.412849 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e77512a1-b460-4008-9e59-5b38f3e9f925" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.414228 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.416550 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.416676 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.417092 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.417181 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.425599 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669"] Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.551545 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5r45\" (UniqueName: \"kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.551601 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.551726 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.654627 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5r45\" (UniqueName: \"kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.654745 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.654979 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.658725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.659255 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.671856 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5r45\" (UniqueName: \"kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ps669\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:44 crc kubenswrapper[4810]: I0219 15:42:44.737241 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:45 crc kubenswrapper[4810]: I0219 15:42:45.420192 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669"] Feb 19 15:42:46 crc kubenswrapper[4810]: I0219 15:42:46.285811 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" event={"ID":"69d67433-38d6-4368-a621-254a97b0c619","Type":"ContainerStarted","Data":"bb2836be18d58e4e1b432c36d80b7fd905a535643a6510de9f3063d41bfc5c9d"} Feb 19 15:42:46 crc kubenswrapper[4810]: I0219 15:42:46.286162 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" event={"ID":"69d67433-38d6-4368-a621-254a97b0c619","Type":"ContainerStarted","Data":"aa1393d62d799aebfca60fb4048ba6c87b3bd42bf6b288655c0b82b2157a23ec"} Feb 19 15:42:46 crc kubenswrapper[4810]: I0219 15:42:46.309983 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" podStartSLOduration=1.8658423229999999 podStartE2EDuration="2.309963769s" podCreationTimestamp="2026-02-19 15:42:44 +0000 UTC" firstStartedPulling="2026-02-19 15:42:45.425296585 +0000 UTC m=+1994.907326709" lastFinishedPulling="2026-02-19 15:42:45.869418021 +0000 UTC m=+1995.351448155" observedRunningTime="2026-02-19 15:42:46.30194686 +0000 UTC m=+1995.783976994" watchObservedRunningTime="2026-02-19 15:42:46.309963769 +0000 UTC m=+1995.791993893" Feb 19 15:42:56 crc kubenswrapper[4810]: I0219 15:42:56.437919 4810 generic.go:334] "Generic (PLEG): container finished" podID="69d67433-38d6-4368-a621-254a97b0c619" containerID="bb2836be18d58e4e1b432c36d80b7fd905a535643a6510de9f3063d41bfc5c9d" exitCode=0 Feb 19 15:42:56 crc kubenswrapper[4810]: I0219 15:42:56.438533 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" event={"ID":"69d67433-38d6-4368-a621-254a97b0c619","Type":"ContainerDied","Data":"bb2836be18d58e4e1b432c36d80b7fd905a535643a6510de9f3063d41bfc5c9d"} Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.009019 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.128594 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory\") pod \"69d67433-38d6-4368-a621-254a97b0c619\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.128901 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam\") pod \"69d67433-38d6-4368-a621-254a97b0c619\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.129016 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5r45\" (UniqueName: \"kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45\") pod \"69d67433-38d6-4368-a621-254a97b0c619\" (UID: \"69d67433-38d6-4368-a621-254a97b0c619\") " Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.136157 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45" (OuterVolumeSpecName: "kube-api-access-q5r45") pod "69d67433-38d6-4368-a621-254a97b0c619" (UID: "69d67433-38d6-4368-a621-254a97b0c619"). InnerVolumeSpecName "kube-api-access-q5r45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.155621 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69d67433-38d6-4368-a621-254a97b0c619" (UID: "69d67433-38d6-4368-a621-254a97b0c619"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.175190 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory" (OuterVolumeSpecName: "inventory") pod "69d67433-38d6-4368-a621-254a97b0c619" (UID: "69d67433-38d6-4368-a621-254a97b0c619"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.232052 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.232106 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5r45\" (UniqueName: \"kubernetes.io/projected/69d67433-38d6-4368-a621-254a97b0c619-kube-api-access-q5r45\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.232121 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69d67433-38d6-4368-a621-254a97b0c619-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.459946 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" 
event={"ID":"69d67433-38d6-4368-a621-254a97b0c619","Type":"ContainerDied","Data":"aa1393d62d799aebfca60fb4048ba6c87b3bd42bf6b288655c0b82b2157a23ec"} Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.460225 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1393d62d799aebfca60fb4048ba6c87b3bd42bf6b288655c0b82b2157a23ec" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.460058 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ps669" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.576877 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"] Feb 19 15:42:58 crc kubenswrapper[4810]: E0219 15:42:58.577559 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d67433-38d6-4368-a621-254a97b0c619" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.577639 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d67433-38d6-4368-a621-254a97b0c619" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.577870 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d67433-38d6-4368-a621-254a97b0c619" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.578626 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.581513 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.581551 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.581639 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.582582 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.583561 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.583568 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.587344 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.587739 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.594577 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"] Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741265 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741312 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741369 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrf6\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741398 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741429 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741470 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741496 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741527 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741563 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741607 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741628 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741648 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741665 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.741735 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843465 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843539 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843595 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843621 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843640 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843658 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843694 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843731 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843749 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843779 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrf6\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843797 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843817 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843852 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.843879 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.848390 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.848797 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.849924 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.850721 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.851494 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.851573 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.851842 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.855090 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.856547 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.856855 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.857640 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.859002 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.866385 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.872030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrf6\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:58 crc kubenswrapper[4810]: I0219 15:42:58.897671 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:42:59 crc kubenswrapper[4810]: W0219 15:42:59.509233 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31bd8fe5_f0b6_4463_a545_bdeb0c33b182.slice/crio-9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92 WatchSource:0}: Error finding container 9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92: Status 404 returned error can't find the container with id 9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92
Feb 19 15:42:59 crc kubenswrapper[4810]: I0219 15:42:59.523548 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"]
Feb 19 15:43:00 crc kubenswrapper[4810]: I0219 15:43:00.488704 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" event={"ID":"31bd8fe5-f0b6-4463-a545-bdeb0c33b182","Type":"ContainerStarted","Data":"29281feef62bd69d0830d82f4dc58d875d408df7a070860546c94eda4eba5135"}
Feb 19 15:43:00 crc kubenswrapper[4810]: I0219 15:43:00.489289 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" event={"ID":"31bd8fe5-f0b6-4463-a545-bdeb0c33b182","Type":"ContainerStarted","Data":"9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92"}
Feb 19 15:43:00 crc kubenswrapper[4810]: I0219 15:43:00.508248 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" podStartSLOduration=2.112100963 podStartE2EDuration="2.508230247s" podCreationTimestamp="2026-02-19 15:42:58 +0000 UTC" firstStartedPulling="2026-02-19 15:42:59.512765133 +0000 UTC m=+2008.994795267" lastFinishedPulling="2026-02-19 15:42:59.908894387 +0000 UTC m=+2009.390924551" observedRunningTime="2026-02-19 15:43:00.50594419 +0000 UTC m=+2009.987974314" watchObservedRunningTime="2026-02-19 15:43:00.508230247 +0000 UTC m=+2009.990260371"
Feb 19 15:43:19 crc kubenswrapper[4810]: I0219 15:43:19.537121 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 15:43:19 crc kubenswrapper[4810]: I0219 15:43:19.538876 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 15:43:37 crc kubenswrapper[4810]: I0219 15:43:37.921378 4810 generic.go:334] "Generic (PLEG): container finished" podID="31bd8fe5-f0b6-4463-a545-bdeb0c33b182" containerID="29281feef62bd69d0830d82f4dc58d875d408df7a070860546c94eda4eba5135" exitCode=0
Feb 19 15:43:37 crc kubenswrapper[4810]: I0219 15:43:37.921415 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" event={"ID":"31bd8fe5-f0b6-4463-a545-bdeb0c33b182","Type":"ContainerDied","Data":"29281feef62bd69d0830d82f4dc58d875d408df7a070860546c94eda4eba5135"}
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.447816 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.528768 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.528886 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.528927 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.528962 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.528992 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529185 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529257 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529318 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529380 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529436 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529508 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzrf6\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529545 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529616 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.529662 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory\") pod \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\" (UID: \"31bd8fe5-f0b6-4463-a545-bdeb0c33b182\") "
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.535023 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.537247 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.537768 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.537991 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6" (OuterVolumeSpecName: "kube-api-access-mzrf6") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "kube-api-access-mzrf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.538300 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.538399 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.539160 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.539535 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.540570 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.541023 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.543557 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.551886 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.572055 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory" (OuterVolumeSpecName: "inventory") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.592387 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "31bd8fe5-f0b6-4463-a545-bdeb0c33b182" (UID: "31bd8fe5-f0b6-4463-a545-bdeb0c33b182"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632057 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632100 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632115 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632132 4810 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632144 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzrf6\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-kube-api-access-mzrf6\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632156 4810 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632170 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632183 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632194 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632207 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632219 4810 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632231 4810 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632242 4810 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.632256 4810 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd8fe5-f0b6-4463-a545-bdeb0c33b182-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.941803 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2" event={"ID":"31bd8fe5-f0b6-4463-a545-bdeb0c33b182","Type":"ContainerDied","Data":"9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92"}
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.941852 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9697ec4be9aa47cb04a3a0aed3c914d6f0a0f8b3c8892a8798066b5a83ea1d92"
Feb 19 15:43:39 crc kubenswrapper[4810]: I0219 15:43:39.941874 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.140988 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"]
Feb 19 15:43:40 crc kubenswrapper[4810]: E0219 15:43:40.141662 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bd8fe5-f0b6-4463-a545-bdeb0c33b182" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.141742 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bd8fe5-f0b6-4463-a545-bdeb0c33b182" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.142028 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bd8fe5-f0b6-4463-a545-bdeb0c33b182" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.142726 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.146039 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.146134 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.146428 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.146505 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.146556 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.169941 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"]
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.243201 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.243270 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.243300 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrsm\" (UniqueName: \"kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.243459 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.243510 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.345805 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"
Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.346104 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.346165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrsm\" (UniqueName: \"kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.346245 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.346318 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.348106 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.356610 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.357535 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.358999 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.365443 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrsm\" (UniqueName: \"kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5gjxx\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:40 crc kubenswrapper[4810]: I0219 15:43:40.470675 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:43:41 crc kubenswrapper[4810]: I0219 15:43:41.052057 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx"] Feb 19 15:43:41 crc kubenswrapper[4810]: I0219 15:43:41.981722 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" event={"ID":"4defb710-c07f-4e63-9baf-45f51085abdc","Type":"ContainerStarted","Data":"c9bd171f35c2431c659b24fe1892d82835d93f77adac91954d4f80c34ebd5311"} Feb 19 15:43:41 crc kubenswrapper[4810]: I0219 15:43:41.983220 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" event={"ID":"4defb710-c07f-4e63-9baf-45f51085abdc","Type":"ContainerStarted","Data":"3bf7c888bacb4ee33c608237a690edff99f26fd352fd8968048d21acb796d0f0"} Feb 19 15:43:42 crc kubenswrapper[4810]: I0219 15:43:42.004703 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" podStartSLOduration=1.54626412 podStartE2EDuration="2.004677371s" podCreationTimestamp="2026-02-19 15:43:40 +0000 UTC" firstStartedPulling="2026-02-19 15:43:41.057627639 +0000 UTC m=+2050.539657783" lastFinishedPulling="2026-02-19 15:43:41.51604087 +0000 UTC m=+2050.998071034" observedRunningTime="2026-02-19 15:43:42.001725608 +0000 UTC m=+2051.483755762" watchObservedRunningTime="2026-02-19 15:43:42.004677371 +0000 UTC m=+2051.486707535" Feb 19 15:43:49 crc kubenswrapper[4810]: I0219 15:43:49.538415 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:43:49 crc kubenswrapper[4810]: I0219 15:43:49.539098 4810 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:44:19 crc kubenswrapper[4810]: I0219 15:44:19.537508 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:44:19 crc kubenswrapper[4810]: I0219 15:44:19.538186 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:44:19 crc kubenswrapper[4810]: I0219 15:44:19.538270 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:44:19 crc kubenswrapper[4810]: I0219 15:44:19.539521 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:44:19 crc kubenswrapper[4810]: I0219 15:44:19.539611 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" 
containerName="machine-config-daemon" containerID="cri-o://4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364" gracePeriod=600 Feb 19 15:44:20 crc kubenswrapper[4810]: I0219 15:44:20.393861 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364" exitCode=0 Feb 19 15:44:20 crc kubenswrapper[4810]: I0219 15:44:20.393939 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364"} Feb 19 15:44:20 crc kubenswrapper[4810]: I0219 15:44:20.394517 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f"} Feb 19 15:44:20 crc kubenswrapper[4810]: I0219 15:44:20.394540 4810 scope.go:117] "RemoveContainer" containerID="2ce40b521710718c7ba6b5fc71023bb5882beb271696b0616786a7ce052f1e2a" Feb 19 15:44:52 crc kubenswrapper[4810]: E0219 15:44:52.015951 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4defb710_c07f_4e63_9baf_45f51085abdc.slice/crio-conmon-c9bd171f35c2431c659b24fe1892d82835d93f77adac91954d4f80c34ebd5311.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:44:52 crc kubenswrapper[4810]: I0219 15:44:52.764850 4810 generic.go:334] "Generic (PLEG): container finished" podID="4defb710-c07f-4e63-9baf-45f51085abdc" containerID="c9bd171f35c2431c659b24fe1892d82835d93f77adac91954d4f80c34ebd5311" exitCode=0 Feb 19 15:44:52 crc kubenswrapper[4810]: I0219 15:44:52.764991 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" event={"ID":"4defb710-c07f-4e63-9baf-45f51085abdc","Type":"ContainerDied","Data":"c9bd171f35c2431c659b24fe1892d82835d93f77adac91954d4f80c34ebd5311"} Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.353735 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.456360 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory\") pod \"4defb710-c07f-4e63-9baf-45f51085abdc\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.456429 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle\") pod \"4defb710-c07f-4e63-9baf-45f51085abdc\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.456545 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam\") pod \"4defb710-c07f-4e63-9baf-45f51085abdc\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.456629 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0\") pod \"4defb710-c07f-4e63-9baf-45f51085abdc\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.456840 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrsm\" (UniqueName: \"kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm\") pod \"4defb710-c07f-4e63-9baf-45f51085abdc\" (UID: \"4defb710-c07f-4e63-9baf-45f51085abdc\") " Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.464193 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4defb710-c07f-4e63-9baf-45f51085abdc" (UID: "4defb710-c07f-4e63-9baf-45f51085abdc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.468595 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm" (OuterVolumeSpecName: "kube-api-access-mcrsm") pod "4defb710-c07f-4e63-9baf-45f51085abdc" (UID: "4defb710-c07f-4e63-9baf-45f51085abdc"). InnerVolumeSpecName "kube-api-access-mcrsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.494178 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4defb710-c07f-4e63-9baf-45f51085abdc" (UID: "4defb710-c07f-4e63-9baf-45f51085abdc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.512289 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4defb710-c07f-4e63-9baf-45f51085abdc" (UID: "4defb710-c07f-4e63-9baf-45f51085abdc"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.514787 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory" (OuterVolumeSpecName: "inventory") pod "4defb710-c07f-4e63-9baf-45f51085abdc" (UID: "4defb710-c07f-4e63-9baf-45f51085abdc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.559990 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.560043 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.560065 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4defb710-c07f-4e63-9baf-45f51085abdc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.560086 4810 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/4defb710-c07f-4e63-9baf-45f51085abdc-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.560104 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrsm\" (UniqueName: \"kubernetes.io/projected/4defb710-c07f-4e63-9baf-45f51085abdc-kube-api-access-mcrsm\") on node \"crc\" DevicePath \"\"" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.790152 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" event={"ID":"4defb710-c07f-4e63-9baf-45f51085abdc","Type":"ContainerDied","Data":"3bf7c888bacb4ee33c608237a690edff99f26fd352fd8968048d21acb796d0f0"} Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.790569 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf7c888bacb4ee33c608237a690edff99f26fd352fd8968048d21acb796d0f0" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.790284 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5gjxx" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.939301 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs"] Feb 19 15:44:54 crc kubenswrapper[4810]: E0219 15:44:54.939805 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4defb710-c07f-4e63-9baf-45f51085abdc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.939829 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4defb710-c07f-4e63-9baf-45f51085abdc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.940097 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4defb710-c07f-4e63-9baf-45f51085abdc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.940949 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.945930 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.946241 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.949245 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.952415 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.952828 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.953159 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:44:54 crc kubenswrapper[4810]: I0219 15:44:54.965724 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs"] Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.076643 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.076739 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkgqg\" (UniqueName: \"kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.077018 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.077105 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.077199 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.077403 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.179813 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.179964 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkgqg\" (UniqueName: \"kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.180044 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.180091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.180151 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.181070 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.185551 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.185643 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: 
\"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.187725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.188831 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.189848 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.201006 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkgqg\" (UniqueName: \"kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.275163 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:44:55 crc kubenswrapper[4810]: I0219 15:44:55.872072 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs"] Feb 19 15:44:56 crc kubenswrapper[4810]: I0219 15:44:56.813507 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" event={"ID":"6650a3db-fdc1-4342-b8a8-cb91376e75c5","Type":"ContainerStarted","Data":"0bfb481da1e582a9833bd1505ab13860f9c856ec5cedc6c8ae5ad71bbbe2c772"} Feb 19 15:44:56 crc kubenswrapper[4810]: I0219 15:44:56.814076 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" event={"ID":"6650a3db-fdc1-4342-b8a8-cb91376e75c5","Type":"ContainerStarted","Data":"8d2f457eb3a2c22efd1c5f686fd81de59f3738fe958c14608f0dae2bcba6c1cf"} Feb 19 15:44:56 crc kubenswrapper[4810]: I0219 15:44:56.846712 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" podStartSLOduration=2.403829057 podStartE2EDuration="2.846687016s" podCreationTimestamp="2026-02-19 15:44:54 +0000 UTC" firstStartedPulling="2026-02-19 15:44:55.893528551 +0000 UTC m=+2125.375558685" lastFinishedPulling="2026-02-19 15:44:56.33638651 +0000 UTC m=+2125.818416644" observedRunningTime="2026-02-19 15:44:56.833037417 +0000 UTC m=+2126.315067541" watchObservedRunningTime="2026-02-19 15:44:56.846687016 +0000 UTC m=+2126.328717150" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.134549 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb"] Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.137874 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.140971 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.188123 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.216461 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb"] Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.290620 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqxd\" (UniqueName: \"kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.290674 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.290698 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.393567 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.393635 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.393880 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqxd\" (UniqueName: \"kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.394456 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.402009 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.418152 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqxd\" (UniqueName: \"kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd\") pod \"collect-profiles-29525265-dnbnb\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.506856 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:00 crc kubenswrapper[4810]: I0219 15:45:00.995829 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb"] Feb 19 15:45:01 crc kubenswrapper[4810]: I0219 15:45:01.862033 4810 generic.go:334] "Generic (PLEG): container finished" podID="9ebe856b-d546-48e1-862d-d9f039620b73" containerID="ea1fa5fe82d5994ff114d9b04616fab9d73e059f9ec50ceb138445dbcf8a33cf" exitCode=0 Feb 19 15:45:01 crc kubenswrapper[4810]: I0219 15:45:01.862074 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" event={"ID":"9ebe856b-d546-48e1-862d-d9f039620b73","Type":"ContainerDied","Data":"ea1fa5fe82d5994ff114d9b04616fab9d73e059f9ec50ceb138445dbcf8a33cf"} Feb 19 15:45:01 crc kubenswrapper[4810]: I0219 15:45:01.862345 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" 
event={"ID":"9ebe856b-d546-48e1-862d-d9f039620b73","Type":"ContainerStarted","Data":"3310a652d205d8ec8d61afc514f564de3aa5d2a0ad8d9c147b7c953675134dd1"} Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.292603 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.379368 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jqxd\" (UniqueName: \"kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd\") pod \"9ebe856b-d546-48e1-862d-d9f039620b73\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.379548 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume\") pod \"9ebe856b-d546-48e1-862d-d9f039620b73\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.379589 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume\") pod \"9ebe856b-d546-48e1-862d-d9f039620b73\" (UID: \"9ebe856b-d546-48e1-862d-d9f039620b73\") " Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.380686 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ebe856b-d546-48e1-862d-d9f039620b73" (UID: "9ebe856b-d546-48e1-862d-d9f039620b73"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.385107 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ebe856b-d546-48e1-862d-d9f039620b73" (UID: "9ebe856b-d546-48e1-862d-d9f039620b73"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.385652 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd" (OuterVolumeSpecName: "kube-api-access-5jqxd") pod "9ebe856b-d546-48e1-862d-d9f039620b73" (UID: "9ebe856b-d546-48e1-862d-d9f039620b73"). InnerVolumeSpecName "kube-api-access-5jqxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.481642 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebe856b-d546-48e1-862d-d9f039620b73-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.481687 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ebe856b-d546-48e1-862d-d9f039620b73-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.481704 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jqxd\" (UniqueName: \"kubernetes.io/projected/9ebe856b-d546-48e1-862d-d9f039620b73-kube-api-access-5jqxd\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.891044 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" 
event={"ID":"9ebe856b-d546-48e1-862d-d9f039620b73","Type":"ContainerDied","Data":"3310a652d205d8ec8d61afc514f564de3aa5d2a0ad8d9c147b7c953675134dd1"} Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.891401 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3310a652d205d8ec8d61afc514f564de3aa5d2a0ad8d9c147b7c953675134dd1" Feb 19 15:45:03 crc kubenswrapper[4810]: I0219 15:45:03.891479 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb" Feb 19 15:45:04 crc kubenswrapper[4810]: I0219 15:45:04.411642 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml"] Feb 19 15:45:04 crc kubenswrapper[4810]: I0219 15:45:04.425361 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525220-wqnml"] Feb 19 15:45:05 crc kubenswrapper[4810]: I0219 15:45:05.462289 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8383a9e3-149b-4512-a9fd-12cd0b65e370" path="/var/lib/kubelet/pods/8383a9e3-149b-4512-a9fd-12cd0b65e370/volumes" Feb 19 15:45:42 crc kubenswrapper[4810]: I0219 15:45:42.583214 4810 scope.go:117] "RemoveContainer" containerID="c19d21352b758655d40944eafe4d1d6cfee80125c13e6f74424f93ccd9aee7cf" Feb 19 15:45:49 crc kubenswrapper[4810]: I0219 15:45:49.455093 4810 generic.go:334] "Generic (PLEG): container finished" podID="6650a3db-fdc1-4342-b8a8-cb91376e75c5" containerID="0bfb481da1e582a9833bd1505ab13860f9c856ec5cedc6c8ae5ad71bbbe2c772" exitCode=0 Feb 19 15:45:49 crc kubenswrapper[4810]: I0219 15:45:49.455718 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" 
event={"ID":"6650a3db-fdc1-4342-b8a8-cb91376e75c5","Type":"ContainerDied","Data":"0bfb481da1e582a9833bd1505ab13860f9c856ec5cedc6c8ae5ad71bbbe2c772"} Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.048417 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132418 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkgqg\" (UniqueName: \"kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132530 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132588 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132625 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132660 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.132817 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam\") pod \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\" (UID: \"6650a3db-fdc1-4342-b8a8-cb91376e75c5\") " Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.139119 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.139306 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg" (OuterVolumeSpecName: "kube-api-access-zkgqg") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "kube-api-access-zkgqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.164689 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory" (OuterVolumeSpecName: "inventory") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.165990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.182517 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.182938 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6650a3db-fdc1-4342-b8a8-cb91376e75c5" (UID: "6650a3db-fdc1-4342-b8a8-cb91376e75c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.237871 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkgqg\" (UniqueName: \"kubernetes.io/projected/6650a3db-fdc1-4342-b8a8-cb91376e75c5-kube-api-access-zkgqg\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.237917 4810 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.237984 4810 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.238007 4810 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.238028 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.238045 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6650a3db-fdc1-4342-b8a8-cb91376e75c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.484029 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" event={"ID":"6650a3db-fdc1-4342-b8a8-cb91376e75c5","Type":"ContainerDied","Data":"8d2f457eb3a2c22efd1c5f686fd81de59f3738fe958c14608f0dae2bcba6c1cf"} Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.484063 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.484071 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2f457eb3a2c22efd1c5f686fd81de59f3738fe958c14608f0dae2bcba6c1cf" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.658220 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44"] Feb 19 15:45:51 crc kubenswrapper[4810]: E0219 15:45:51.659307 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6650a3db-fdc1-4342-b8a8-cb91376e75c5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.659364 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6650a3db-fdc1-4342-b8a8-cb91376e75c5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 15:45:51 crc kubenswrapper[4810]: E0219 15:45:51.659417 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebe856b-d546-48e1-862d-d9f039620b73" containerName="collect-profiles" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.659431 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebe856b-d546-48e1-862d-d9f039620b73" containerName="collect-profiles" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.659885 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6650a3db-fdc1-4342-b8a8-cb91376e75c5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 15:45:51 crc kubenswrapper[4810]: 
I0219 15:45:51.659913 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ebe856b-d546-48e1-862d-d9f039620b73" containerName="collect-profiles" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.661237 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.663663 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.664440 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.665440 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.666112 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.674080 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.712229 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44"] Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.748096 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.748203 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.748232 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbdx\" (UniqueName: \"kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.748352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.748691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.852923 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.853110 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.853256 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.853365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.853406 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpbdx\" (UniqueName: \"kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.857350 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.857467 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.857855 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.859867 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:51 crc kubenswrapper[4810]: I0219 15:45:51.875269 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpbdx\" (UniqueName: 
\"kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mxd44\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:52 crc kubenswrapper[4810]: I0219 15:45:52.016527 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:45:52 crc kubenswrapper[4810]: I0219 15:45:52.643858 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44"] Feb 19 15:45:53 crc kubenswrapper[4810]: I0219 15:45:53.525995 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" event={"ID":"b0d687e9-21b0-4abe-b7ec-4fb050926f6c","Type":"ContainerStarted","Data":"0b1bbaed8126375699a9e966b7745665836631416087e63a3422e04b7d8a2fdd"} Feb 19 15:45:53 crc kubenswrapper[4810]: I0219 15:45:53.526277 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" event={"ID":"b0d687e9-21b0-4abe-b7ec-4fb050926f6c","Type":"ContainerStarted","Data":"873b652283637e7d5e6deec7064b3a10014ad0fa5b6fb23a1fe74bd39abab9d5"} Feb 19 15:45:53 crc kubenswrapper[4810]: I0219 15:45:53.568892 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" podStartSLOduration=2.079083367 podStartE2EDuration="2.568864093s" podCreationTimestamp="2026-02-19 15:45:51 +0000 UTC" firstStartedPulling="2026-02-19 15:45:52.634624437 +0000 UTC m=+2182.116654571" lastFinishedPulling="2026-02-19 15:45:53.124405123 +0000 UTC m=+2182.606435297" observedRunningTime="2026-02-19 15:45:53.561811457 +0000 UTC m=+2183.043841581" watchObservedRunningTime="2026-02-19 15:45:53.568864093 +0000 UTC m=+2183.050894217" Feb 19 15:46:19 crc 
kubenswrapper[4810]: I0219 15:46:19.537828 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:46:19 crc kubenswrapper[4810]: I0219 15:46:19.538590 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:46:49 crc kubenswrapper[4810]: I0219 15:46:49.538295 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:46:49 crc kubenswrapper[4810]: I0219 15:46:49.539510 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.150844 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.156058 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.168040 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.252686 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ndr\" (UniqueName: \"kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.252786 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.252829 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.355481 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ndr\" (UniqueName: \"kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.355662 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.355732 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.356200 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.356313 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.376228 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ndr\" (UniqueName: \"kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr\") pod \"community-operators-9mfnq\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:52 crc kubenswrapper[4810]: I0219 15:46:52.501784 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:46:53 crc kubenswrapper[4810]: I0219 15:46:53.010782 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:46:53 crc kubenswrapper[4810]: I0219 15:46:53.221883 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerStarted","Data":"11f62df48e5277dbcf164f8317d7b4913a22f5e32ec26e5140bca32e8d987dcf"} Feb 19 15:46:54 crc kubenswrapper[4810]: I0219 15:46:54.237473 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerID="f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad" exitCode=0 Feb 19 15:46:54 crc kubenswrapper[4810]: I0219 15:46:54.237554 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerDied","Data":"f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad"} Feb 19 15:46:56 crc kubenswrapper[4810]: I0219 15:46:56.271907 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerID="d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63" exitCode=0 Feb 19 15:46:56 crc kubenswrapper[4810]: I0219 15:46:56.272062 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerDied","Data":"d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63"} Feb 19 15:46:57 crc kubenswrapper[4810]: I0219 15:46:57.286219 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" 
event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerStarted","Data":"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15"} Feb 19 15:46:57 crc kubenswrapper[4810]: I0219 15:46:57.315198 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mfnq" podStartSLOduration=2.874103124 podStartE2EDuration="5.315177523s" podCreationTimestamp="2026-02-19 15:46:52 +0000 UTC" firstStartedPulling="2026-02-19 15:46:54.240283196 +0000 UTC m=+2243.722313350" lastFinishedPulling="2026-02-19 15:46:56.681357615 +0000 UTC m=+2246.163387749" observedRunningTime="2026-02-19 15:46:57.308166589 +0000 UTC m=+2246.790196713" watchObservedRunningTime="2026-02-19 15:46:57.315177523 +0000 UTC m=+2246.797207647" Feb 19 15:47:02 crc kubenswrapper[4810]: I0219 15:47:02.501940 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:02 crc kubenswrapper[4810]: I0219 15:47:02.502834 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:02 crc kubenswrapper[4810]: I0219 15:47:02.589506 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:03 crc kubenswrapper[4810]: I0219 15:47:03.429781 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:03 crc kubenswrapper[4810]: I0219 15:47:03.500126 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:47:05 crc kubenswrapper[4810]: I0219 15:47:05.382900 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9mfnq" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="registry-server" 
containerID="cri-o://718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15" gracePeriod=2 Feb 19 15:47:05 crc kubenswrapper[4810]: E0219 15:47:05.684534 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c0cd7c_28e7_4c44_889e_7808fac96bfa.slice/crio-conmon-718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c0cd7c_28e7_4c44_889e_7808fac96bfa.slice/crio-718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:47:05 crc kubenswrapper[4810]: I0219 15:47:05.907782 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.076469 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content\") pod \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.076667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities\") pod \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\" (UID: \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.076708 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2ndr\" (UniqueName: \"kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr\") pod \"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\" (UID: 
\"c0c0cd7c-28e7-4c44-889e-7808fac96bfa\") " Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.078243 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities" (OuterVolumeSpecName: "utilities") pod "c0c0cd7c-28e7-4c44-889e-7808fac96bfa" (UID: "c0c0cd7c-28e7-4c44-889e-7808fac96bfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.087320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr" (OuterVolumeSpecName: "kube-api-access-h2ndr") pod "c0c0cd7c-28e7-4c44-889e-7808fac96bfa" (UID: "c0c0cd7c-28e7-4c44-889e-7808fac96bfa"). InnerVolumeSpecName "kube-api-access-h2ndr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.133041 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0c0cd7c-28e7-4c44-889e-7808fac96bfa" (UID: "c0c0cd7c-28e7-4c44-889e-7808fac96bfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.179281 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.179602 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2ndr\" (UniqueName: \"kubernetes.io/projected/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-kube-api-access-h2ndr\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.179708 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c0cd7c-28e7-4c44-889e-7808fac96bfa-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.398413 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerID="718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15" exitCode=0 Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.398474 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerDied","Data":"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15"} Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.398525 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mfnq" event={"ID":"c0c0cd7c-28e7-4c44-889e-7808fac96bfa","Type":"ContainerDied","Data":"11f62df48e5277dbcf164f8317d7b4913a22f5e32ec26e5140bca32e8d987dcf"} Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.398588 4810 scope.go:117] "RemoveContainer" containerID="718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 
15:47:06.400532 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mfnq" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.425899 4810 scope.go:117] "RemoveContainer" containerID="d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.480909 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.484970 4810 scope.go:117] "RemoveContainer" containerID="f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.494404 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mfnq"] Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.528374 4810 scope.go:117] "RemoveContainer" containerID="718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15" Feb 19 15:47:06 crc kubenswrapper[4810]: E0219 15:47:06.529219 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15\": container with ID starting with 718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15 not found: ID does not exist" containerID="718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.529267 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15"} err="failed to get container status \"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15\": rpc error: code = NotFound desc = could not find container \"718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15\": container with ID starting with 
718c5c7061ee5cf5c92fa1b9a0b876e92d585a0781ad5002a63205e1dd7b4e15 not found: ID does not exist" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.529299 4810 scope.go:117] "RemoveContainer" containerID="d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63" Feb 19 15:47:06 crc kubenswrapper[4810]: E0219 15:47:06.530109 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63\": container with ID starting with d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63 not found: ID does not exist" containerID="d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.530177 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63"} err="failed to get container status \"d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63\": rpc error: code = NotFound desc = could not find container \"d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63\": container with ID starting with d5e557007c609c199cf61b46011dbd18ca7056cb82f034ef80e7bbf27b827e63 not found: ID does not exist" Feb 19 15:47:06 crc kubenswrapper[4810]: I0219 15:47:06.530220 4810 scope.go:117] "RemoveContainer" containerID="f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad" Feb 19 15:47:06 crc kubenswrapper[4810]: E0219 15:47:06.530627 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad\": container with ID starting with f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad not found: ID does not exist" containerID="f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad" Feb 19 15:47:06 crc 
kubenswrapper[4810]: I0219 15:47:06.530667 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad"} err="failed to get container status \"f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad\": rpc error: code = NotFound desc = could not find container \"f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad\": container with ID starting with f27bee4fc68ef668f895a22d4bdc17a101cf723a973b64d06bcc45dc50a5efad not found: ID does not exist" Feb 19 15:47:07 crc kubenswrapper[4810]: I0219 15:47:07.453874 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" path="/var/lib/kubelet/pods/c0c0cd7c-28e7-4c44-889e-7808fac96bfa/volumes" Feb 19 15:47:19 crc kubenswrapper[4810]: I0219 15:47:19.537297 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:47:19 crc kubenswrapper[4810]: I0219 15:47:19.538682 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:47:19 crc kubenswrapper[4810]: I0219 15:47:19.538782 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:47:19 crc kubenswrapper[4810]: I0219 15:47:19.539585 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:47:19 crc kubenswrapper[4810]: I0219 15:47:19.539721 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" gracePeriod=600 Feb 19 15:47:19 crc kubenswrapper[4810]: E0219 15:47:19.662580 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:47:20 crc kubenswrapper[4810]: I0219 15:47:20.551758 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" exitCode=0 Feb 19 15:47:20 crc kubenswrapper[4810]: I0219 15:47:20.551864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f"} Feb 19 15:47:20 crc kubenswrapper[4810]: I0219 15:47:20.552197 4810 scope.go:117] "RemoveContainer" containerID="4e3ef6a26491a979d9b15a4e163fa3567692b3b0eef18273908461c8a7758364" Feb 19 15:47:20 crc kubenswrapper[4810]: I0219 15:47:20.553071 4810 
scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:47:20 crc kubenswrapper[4810]: E0219 15:47:20.553903 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:47:35 crc kubenswrapper[4810]: I0219 15:47:35.439431 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:47:35 crc kubenswrapper[4810]: E0219 15:47:35.440226 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:47:49 crc kubenswrapper[4810]: I0219 15:47:49.442440 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:47:49 crc kubenswrapper[4810]: E0219 15:47:49.444217 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:48:02 crc kubenswrapper[4810]: I0219 
15:48:02.441155 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:48:02 crc kubenswrapper[4810]: E0219 15:48:02.442086 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:48:16 crc kubenswrapper[4810]: I0219 15:48:16.439561 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:48:16 crc kubenswrapper[4810]: E0219 15:48:16.440804 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:48:31 crc kubenswrapper[4810]: I0219 15:48:31.464530 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:48:31 crc kubenswrapper[4810]: E0219 15:48:31.465293 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:48:45 crc 
kubenswrapper[4810]: I0219 15:48:45.440749 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:48:45 crc kubenswrapper[4810]: E0219 15:48:45.441580 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:49:00 crc kubenswrapper[4810]: I0219 15:49:00.440609 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:49:00 crc kubenswrapper[4810]: E0219 15:49:00.441604 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:49:12 crc kubenswrapper[4810]: I0219 15:49:12.441466 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:49:12 crc kubenswrapper[4810]: E0219 15:49:12.442452 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 
19 15:49:23 crc kubenswrapper[4810]: I0219 15:49:23.440528 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:49:23 crc kubenswrapper[4810]: E0219 15:49:23.441720 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:49:34 crc kubenswrapper[4810]: I0219 15:49:34.440282 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:49:34 crc kubenswrapper[4810]: E0219 15:49:34.442526 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:49:49 crc kubenswrapper[4810]: I0219 15:49:49.440246 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:49:49 crc kubenswrapper[4810]: E0219 15:49:49.441432 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:49:52 crc kubenswrapper[4810]: I0219 15:49:52.348311 4810 generic.go:334] "Generic (PLEG): container finished" podID="b0d687e9-21b0-4abe-b7ec-4fb050926f6c" containerID="0b1bbaed8126375699a9e966b7745665836631416087e63a3422e04b7d8a2fdd" exitCode=0 Feb 19 15:49:52 crc kubenswrapper[4810]: I0219 15:49:52.348379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" event={"ID":"b0d687e9-21b0-4abe-b7ec-4fb050926f6c","Type":"ContainerDied","Data":"0b1bbaed8126375699a9e966b7745665836631416087e63a3422e04b7d8a2fdd"} Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.761321 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.861263 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpbdx\" (UniqueName: \"kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx\") pod \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.861512 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam\") pod \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.861539 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle\") pod \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " Feb 19 15:49:53 crc 
kubenswrapper[4810]: I0219 15:49:53.861564 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0\") pod \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.861656 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory\") pod \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\" (UID: \"b0d687e9-21b0-4abe-b7ec-4fb050926f6c\") " Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.866803 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b0d687e9-21b0-4abe-b7ec-4fb050926f6c" (UID: "b0d687e9-21b0-4abe-b7ec-4fb050926f6c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.868251 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx" (OuterVolumeSpecName: "kube-api-access-vpbdx") pod "b0d687e9-21b0-4abe-b7ec-4fb050926f6c" (UID: "b0d687e9-21b0-4abe-b7ec-4fb050926f6c"). InnerVolumeSpecName "kube-api-access-vpbdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.891600 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b0d687e9-21b0-4abe-b7ec-4fb050926f6c" (UID: "b0d687e9-21b0-4abe-b7ec-4fb050926f6c"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.895273 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory" (OuterVolumeSpecName: "inventory") pod "b0d687e9-21b0-4abe-b7ec-4fb050926f6c" (UID: "b0d687e9-21b0-4abe-b7ec-4fb050926f6c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.895507 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b0d687e9-21b0-4abe-b7ec-4fb050926f6c" (UID: "b0d687e9-21b0-4abe-b7ec-4fb050926f6c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.964859 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.964902 4810 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.964915 4810 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.964928 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:49:53 crc kubenswrapper[4810]: I0219 15:49:53.964939 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpbdx\" (UniqueName: \"kubernetes.io/projected/b0d687e9-21b0-4abe-b7ec-4fb050926f6c-kube-api-access-vpbdx\") on node \"crc\" DevicePath \"\"" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.373583 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" event={"ID":"b0d687e9-21b0-4abe-b7ec-4fb050926f6c","Type":"ContainerDied","Data":"873b652283637e7d5e6deec7064b3a10014ad0fa5b6fb23a1fe74bd39abab9d5"} Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.374145 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873b652283637e7d5e6deec7064b3a10014ad0fa5b6fb23a1fe74bd39abab9d5" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.373668 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mxd44" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.562207 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh"] Feb 19 15:49:54 crc kubenswrapper[4810]: E0219 15:49:54.562907 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d687e9-21b0-4abe-b7ec-4fb050926f6c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.562965 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d687e9-21b0-4abe-b7ec-4fb050926f6c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 15:49:54 crc kubenswrapper[4810]: E0219 15:49:54.562995 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="registry-server" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.563009 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="registry-server" Feb 19 15:49:54 crc kubenswrapper[4810]: E0219 15:49:54.563069 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="extract-utilities" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.563084 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="extract-utilities" Feb 19 15:49:54 crc kubenswrapper[4810]: E0219 15:49:54.563108 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="extract-content" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.563120 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="extract-content" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.563501 4810 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c0cd7c-28e7-4c44-889e-7808fac96bfa" containerName="registry-server" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.563534 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d687e9-21b0-4abe-b7ec-4fb050926f6c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.564753 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.568943 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.569096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.570026 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.571716 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.572070 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.573574 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh"] Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.574879 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.575978 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:49:54 crc 
kubenswrapper[4810]: I0219 15:49:54.677757 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.677844 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.677875 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.677911 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.677959 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678068 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x224k\" (UniqueName: \"kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678125 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678186 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678264 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.678306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780096 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780154 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 
15:49:54.780187 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780238 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x224k\" (UniqueName: \"kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780553 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780807 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780882 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.780972 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.781016 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.781057 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 
15:49:54.783399 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.785581 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.785582 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.785611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.786040 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: 
\"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.786461 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.787576 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.792944 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.793307 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.793592 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.802175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x224k\" (UniqueName: \"kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nv8wh\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:54 crc kubenswrapper[4810]: I0219 15:49:54.884625 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:49:55 crc kubenswrapper[4810]: I0219 15:49:55.422389 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh"] Feb 19 15:49:55 crc kubenswrapper[4810]: W0219 15:49:55.429012 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5014f8_e5aa_47ad_8787_c187b0f7f0e1.slice/crio-ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a WatchSource:0}: Error finding container ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a: Status 404 returned error can't find the container with id ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a Feb 19 15:49:55 crc kubenswrapper[4810]: I0219 15:49:55.432284 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:49:56 crc kubenswrapper[4810]: I0219 15:49:56.399310 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" event={"ID":"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1","Type":"ContainerStarted","Data":"2e87240d488a9e3fc53ca295a40f539e65e717cb89ae7e5a0bdf479e1de61898"} Feb 19 15:49:56 crc kubenswrapper[4810]: I0219 15:49:56.399674 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" event={"ID":"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1","Type":"ContainerStarted","Data":"ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a"} Feb 19 15:49:56 crc kubenswrapper[4810]: I0219 15:49:56.446673 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" podStartSLOduration=1.976681879 podStartE2EDuration="2.446640242s" podCreationTimestamp="2026-02-19 15:49:54 +0000 UTC" firstStartedPulling="2026-02-19 15:49:55.43203918 +0000 UTC m=+2424.914069294" lastFinishedPulling="2026-02-19 15:49:55.901997523 +0000 UTC m=+2425.384027657" observedRunningTime="2026-02-19 15:49:56.428610404 +0000 UTC m=+2425.910640528" watchObservedRunningTime="2026-02-19 15:49:56.446640242 +0000 UTC m=+2425.928670416" Feb 19 15:50:03 crc kubenswrapper[4810]: I0219 15:50:03.439976 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:50:03 crc kubenswrapper[4810]: E0219 15:50:03.441698 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.123782 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.128584 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.138149 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.288344 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.288689 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sm7d\" (UniqueName: \"kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.288962 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.390708 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") 
" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.390800 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.390886 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sm7d\" (UniqueName: \"kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.391360 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.391369 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.413651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sm7d\" (UniqueName: \"kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d\") pod \"certified-operators-fc5r4\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " 
pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:10 crc kubenswrapper[4810]: I0219 15:50:10.462423 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:11 crc kubenswrapper[4810]: W0219 15:50:11.003031 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b7ad7c_cc02_4a75_8ac7_47efd18067b9.slice/crio-c43fde9c5e8b2fa870f56fd8d2482a1979719f8bc8a8500731f79f842e1747c0 WatchSource:0}: Error finding container c43fde9c5e8b2fa870f56fd8d2482a1979719f8bc8a8500731f79f842e1747c0: Status 404 returned error can't find the container with id c43fde9c5e8b2fa870f56fd8d2482a1979719f8bc8a8500731f79f842e1747c0 Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.004685 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.579583 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerID="4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e" exitCode=0 Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.579744 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerDied","Data":"4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e"} Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.579952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerStarted","Data":"c43fde9c5e8b2fa870f56fd8d2482a1979719f8bc8a8500731f79f842e1747c0"} Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.925385 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.928792 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:11 crc kubenswrapper[4810]: I0219 15:50:11.935150 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.019349 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.019890 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.020659 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzp4\" (UniqueName: \"kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.122811 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzp4\" (UniqueName: \"kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " 
pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.122883 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.122990 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.123585 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.124182 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.142377 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzp4\" (UniqueName: \"kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4\") pod \"redhat-operators-4gb7b\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc 
kubenswrapper[4810]: I0219 15:50:12.276541 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.592876 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerStarted","Data":"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002"} Feb 19 15:50:12 crc kubenswrapper[4810]: I0219 15:50:12.764134 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:13 crc kubenswrapper[4810]: I0219 15:50:13.603384 4810 generic.go:334] "Generic (PLEG): container finished" podID="505f9989-9548-4391-b758-33ef9484f145" containerID="98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6" exitCode=0 Feb 19 15:50:13 crc kubenswrapper[4810]: I0219 15:50:13.603457 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerDied","Data":"98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6"} Feb 19 15:50:13 crc kubenswrapper[4810]: I0219 15:50:13.603727 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerStarted","Data":"8abdba986b1625129435e6790b7e3757e1e286235597f9e2b12f7877f6574c61"} Feb 19 15:50:13 crc kubenswrapper[4810]: I0219 15:50:13.605584 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerID="8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002" exitCode=0 Feb 19 15:50:13 crc kubenswrapper[4810]: I0219 15:50:13.605610 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" 
event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerDied","Data":"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002"} Feb 19 15:50:14 crc kubenswrapper[4810]: I0219 15:50:14.618087 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerStarted","Data":"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43"} Feb 19 15:50:14 crc kubenswrapper[4810]: I0219 15:50:14.651824 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fc5r4" podStartSLOduration=2.231688478 podStartE2EDuration="4.651787389s" podCreationTimestamp="2026-02-19 15:50:10 +0000 UTC" firstStartedPulling="2026-02-19 15:50:11.584861898 +0000 UTC m=+2441.066892062" lastFinishedPulling="2026-02-19 15:50:14.004960839 +0000 UTC m=+2443.486990973" observedRunningTime="2026-02-19 15:50:14.643189665 +0000 UTC m=+2444.125219859" watchObservedRunningTime="2026-02-19 15:50:14.651787389 +0000 UTC m=+2444.133817553" Feb 19 15:50:15 crc kubenswrapper[4810]: I0219 15:50:15.630980 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerStarted","Data":"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72"} Feb 19 15:50:16 crc kubenswrapper[4810]: I0219 15:50:16.440182 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:50:16 crc kubenswrapper[4810]: E0219 15:50:16.441016 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:50:17 crc kubenswrapper[4810]: I0219 15:50:17.655527 4810 generic.go:334] "Generic (PLEG): container finished" podID="505f9989-9548-4391-b758-33ef9484f145" containerID="9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72" exitCode=0 Feb 19 15:50:17 crc kubenswrapper[4810]: I0219 15:50:17.655595 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerDied","Data":"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72"} Feb 19 15:50:19 crc kubenswrapper[4810]: I0219 15:50:19.687043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerStarted","Data":"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3"} Feb 19 15:50:19 crc kubenswrapper[4810]: I0219 15:50:19.727529 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4gb7b" podStartSLOduration=3.917432384 podStartE2EDuration="8.727501448s" podCreationTimestamp="2026-02-19 15:50:11 +0000 UTC" firstStartedPulling="2026-02-19 15:50:13.604950266 +0000 UTC m=+2443.086980400" lastFinishedPulling="2026-02-19 15:50:18.41501932 +0000 UTC m=+2447.897049464" observedRunningTime="2026-02-19 15:50:19.715706374 +0000 UTC m=+2449.197736508" watchObservedRunningTime="2026-02-19 15:50:19.727501448 +0000 UTC m=+2449.209531612" Feb 19 15:50:20 crc kubenswrapper[4810]: I0219 15:50:20.463203 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:20 crc kubenswrapper[4810]: I0219 15:50:20.463806 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:20 crc kubenswrapper[4810]: I0219 15:50:20.524090 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:20 crc kubenswrapper[4810]: I0219 15:50:20.758043 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:21 crc kubenswrapper[4810]: I0219 15:50:21.906305 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:22 crc kubenswrapper[4810]: I0219 15:50:22.278234 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:22 crc kubenswrapper[4810]: I0219 15:50:22.278623 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:23 crc kubenswrapper[4810]: I0219 15:50:23.348029 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4gb7b" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="registry-server" probeResult="failure" output=< Feb 19 15:50:23 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 15:50:23 crc kubenswrapper[4810]: > Feb 19 15:50:23 crc kubenswrapper[4810]: I0219 15:50:23.741812 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fc5r4" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="registry-server" containerID="cri-o://7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43" gracePeriod=2 Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.328050 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.413068 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content\") pod \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.413159 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities\") pod \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.413249 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sm7d\" (UniqueName: \"kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d\") pod \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\" (UID: \"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9\") " Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.414137 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities" (OuterVolumeSpecName: "utilities") pod "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" (UID: "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.426658 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d" (OuterVolumeSpecName: "kube-api-access-8sm7d") pod "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" (UID: "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9"). InnerVolumeSpecName "kube-api-access-8sm7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.485979 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" (UID: "f4b7ad7c-cc02-4a75-8ac7-47efd18067b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.515971 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.516018 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.516031 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sm7d\" (UniqueName: \"kubernetes.io/projected/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9-kube-api-access-8sm7d\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.754583 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerID="7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43" exitCode=0 Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.754620 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerDied","Data":"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43"} Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.755056 4810 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fc5r4" event={"ID":"f4b7ad7c-cc02-4a75-8ac7-47efd18067b9","Type":"ContainerDied","Data":"c43fde9c5e8b2fa870f56fd8d2482a1979719f8bc8a8500731f79f842e1747c0"} Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.754628 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fc5r4" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.755085 4810 scope.go:117] "RemoveContainer" containerID="7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.783620 4810 scope.go:117] "RemoveContainer" containerID="8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.801215 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.808816 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fc5r4"] Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.833611 4810 scope.go:117] "RemoveContainer" containerID="4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.861640 4810 scope.go:117] "RemoveContainer" containerID="7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43" Feb 19 15:50:24 crc kubenswrapper[4810]: E0219 15:50:24.862388 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43\": container with ID starting with 7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43 not found: ID does not exist" containerID="7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 
15:50:24.862535 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43"} err="failed to get container status \"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43\": rpc error: code = NotFound desc = could not find container \"7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43\": container with ID starting with 7100a0cce48fd72efe5710c85a2b6e7a60fd1424a4a404164ae5118372e83f43 not found: ID does not exist" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.862653 4810 scope.go:117] "RemoveContainer" containerID="8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002" Feb 19 15:50:24 crc kubenswrapper[4810]: E0219 15:50:24.863172 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002\": container with ID starting with 8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002 not found: ID does not exist" containerID="8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.863205 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002"} err="failed to get container status \"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002\": rpc error: code = NotFound desc = could not find container \"8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002\": container with ID starting with 8510db2e629e8bb213e4a8889c2e19ef5716ff9167ee722bb255cd38156b4002 not found: ID does not exist" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.863229 4810 scope.go:117] "RemoveContainer" containerID="4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e" Feb 19 15:50:24 crc 
kubenswrapper[4810]: E0219 15:50:24.863715 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e\": container with ID starting with 4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e not found: ID does not exist" containerID="4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e" Feb 19 15:50:24 crc kubenswrapper[4810]: I0219 15:50:24.863821 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e"} err="failed to get container status \"4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e\": rpc error: code = NotFound desc = could not find container \"4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e\": container with ID starting with 4aca170e13e53204b5023bdecba116a7203aa5391a1b34fd52cfc2d7f859375e not found: ID does not exist" Feb 19 15:50:25 crc kubenswrapper[4810]: I0219 15:50:25.457424 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" path="/var/lib/kubelet/pods/f4b7ad7c-cc02-4a75-8ac7-47efd18067b9/volumes" Feb 19 15:50:29 crc kubenswrapper[4810]: I0219 15:50:29.439906 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:50:29 crc kubenswrapper[4810]: E0219 15:50:29.441113 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:50:32 crc 
kubenswrapper[4810]: I0219 15:50:32.329454 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:32 crc kubenswrapper[4810]: I0219 15:50:32.392965 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:32 crc kubenswrapper[4810]: I0219 15:50:32.564152 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:33 crc kubenswrapper[4810]: I0219 15:50:33.900147 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4gb7b" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="registry-server" containerID="cri-o://d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3" gracePeriod=2 Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.428458 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.548239 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities\") pod \"505f9989-9548-4391-b758-33ef9484f145\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.548288 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content\") pod \"505f9989-9548-4391-b758-33ef9484f145\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.548347 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwzp4\" (UniqueName: \"kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4\") pod \"505f9989-9548-4391-b758-33ef9484f145\" (UID: \"505f9989-9548-4391-b758-33ef9484f145\") " Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.550968 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities" (OuterVolumeSpecName: "utilities") pod "505f9989-9548-4391-b758-33ef9484f145" (UID: "505f9989-9548-4391-b758-33ef9484f145"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.556851 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4" (OuterVolumeSpecName: "kube-api-access-pwzp4") pod "505f9989-9548-4391-b758-33ef9484f145" (UID: "505f9989-9548-4391-b758-33ef9484f145"). InnerVolumeSpecName "kube-api-access-pwzp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.651232 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.651281 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwzp4\" (UniqueName: \"kubernetes.io/projected/505f9989-9548-4391-b758-33ef9484f145-kube-api-access-pwzp4\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.716624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "505f9989-9548-4391-b758-33ef9484f145" (UID: "505f9989-9548-4391-b758-33ef9484f145"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.752823 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505f9989-9548-4391-b758-33ef9484f145-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.915442 4810 generic.go:334] "Generic (PLEG): container finished" podID="505f9989-9548-4391-b758-33ef9484f145" containerID="d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3" exitCode=0 Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.915509 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerDied","Data":"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3"} Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.915566 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-4gb7b" event={"ID":"505f9989-9548-4391-b758-33ef9484f145","Type":"ContainerDied","Data":"8abdba986b1625129435e6790b7e3757e1e286235597f9e2b12f7877f6574c61"} Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.915574 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4gb7b" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.915595 4810 scope.go:117] "RemoveContainer" containerID="d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.945732 4810 scope.go:117] "RemoveContainer" containerID="9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.975597 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.990189 4810 scope.go:117] "RemoveContainer" containerID="98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6" Feb 19 15:50:34 crc kubenswrapper[4810]: I0219 15:50:34.992195 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4gb7b"] Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.048782 4810 scope.go:117] "RemoveContainer" containerID="d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3" Feb 19 15:50:35 crc kubenswrapper[4810]: E0219 15:50:35.049363 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3\": container with ID starting with d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3 not found: ID does not exist" containerID="d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.049411 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3"} err="failed to get container status \"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3\": rpc error: code = NotFound desc = could not find container \"d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3\": container with ID starting with d13f04dd8f83d8f3b49c862c6aa6e98dfeeac1f7ae0d8ec94ede6a74195918b3 not found: ID does not exist" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.049443 4810 scope.go:117] "RemoveContainer" containerID="9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72" Feb 19 15:50:35 crc kubenswrapper[4810]: E0219 15:50:35.049811 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72\": container with ID starting with 9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72 not found: ID does not exist" containerID="9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.049854 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72"} err="failed to get container status \"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72\": rpc error: code = NotFound desc = could not find container \"9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72\": container with ID starting with 9156315524b62b8e2dc59d942645a9530f1aefb6429d80d2e3bb2c9ad987cb72 not found: ID does not exist" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.049895 4810 scope.go:117] "RemoveContainer" containerID="98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6" Feb 19 15:50:35 crc kubenswrapper[4810]: E0219 
15:50:35.050222 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6\": container with ID starting with 98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6 not found: ID does not exist" containerID="98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.050254 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6"} err="failed to get container status \"98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6\": rpc error: code = NotFound desc = could not find container \"98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6\": container with ID starting with 98c0f22403cbf6d1afd69ca2613abb589784f884be7c977062718a384a79f1d6 not found: ID does not exist" Feb 19 15:50:35 crc kubenswrapper[4810]: I0219 15:50:35.454632 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505f9989-9548-4391-b758-33ef9484f145" path="/var/lib/kubelet/pods/505f9989-9548-4391-b758-33ef9484f145/volumes" Feb 19 15:50:40 crc kubenswrapper[4810]: I0219 15:50:40.439639 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:50:40 crc kubenswrapper[4810]: E0219 15:50:40.440621 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:50:51 crc kubenswrapper[4810]: I0219 15:50:51.455222 
4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:50:51 crc kubenswrapper[4810]: E0219 15:50:51.456589 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:51:05 crc kubenswrapper[4810]: I0219 15:51:05.441862 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:51:05 crc kubenswrapper[4810]: E0219 15:51:05.442852 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:51:16 crc kubenswrapper[4810]: I0219 15:51:16.439664 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:51:16 crc kubenswrapper[4810]: E0219 15:51:16.440544 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:51:30 crc kubenswrapper[4810]: I0219 
15:51:30.439288 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:51:30 crc kubenswrapper[4810]: E0219 15:51:30.440259 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:51:41 crc kubenswrapper[4810]: I0219 15:51:41.444899 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:51:41 crc kubenswrapper[4810]: E0219 15:51:41.445876 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:51:53 crc kubenswrapper[4810]: I0219 15:51:53.442640 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:51:53 crc kubenswrapper[4810]: E0219 15:51:53.443456 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:52:07 crc 
kubenswrapper[4810]: I0219 15:52:07.440705 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:52:07 crc kubenswrapper[4810]: E0219 15:52:07.441798 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:52:22 crc kubenswrapper[4810]: I0219 15:52:22.440603 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:52:23 crc kubenswrapper[4810]: I0219 15:52:23.149289 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896"} Feb 19 15:52:28 crc kubenswrapper[4810]: I0219 15:52:28.218982 4810 generic.go:334] "Generic (PLEG): container finished" podID="cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" containerID="2e87240d488a9e3fc53ca295a40f539e65e717cb89ae7e5a0bdf479e1de61898" exitCode=0 Feb 19 15:52:28 crc kubenswrapper[4810]: I0219 15:52:28.219176 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" event={"ID":"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1","Type":"ContainerDied","Data":"2e87240d488a9e3fc53ca295a40f539e65e717cb89ae7e5a0bdf479e1de61898"} Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.781100 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.928315 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.928857 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.929055 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x224k\" (UniqueName: \"kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.929525 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.929672 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.929799 
4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.929910 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.930081 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.930220 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.930739 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.930902 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0\") pod \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\" (UID: \"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1\") " Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.936272 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k" (OuterVolumeSpecName: "kube-api-access-x224k") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "kube-api-access-x224k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.937731 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.963613 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.965494 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.966941 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.968993 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.972525 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory" (OuterVolumeSpecName: "inventory") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.973575 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.981593 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.984486 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:29 crc kubenswrapper[4810]: I0219 15:52:29.998569 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" (UID: "cc5014f8-e5aa-47ad-8787-c187b0f7f0e1"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038768 4810 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038810 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038823 4810 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038837 4810 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038849 4810 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038860 4810 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038872 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x224k\" (UniqueName: 
\"kubernetes.io/projected/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-kube-api-access-x224k\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038884 4810 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038895 4810 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038908 4810 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.038920 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc5014f8-e5aa-47ad-8787-c187b0f7f0e1-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.247001 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" event={"ID":"cc5014f8-e5aa-47ad-8787-c187b0f7f0e1","Type":"ContainerDied","Data":"ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a"} Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.247044 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6d44e109e65ba6e824f2d3f6c56a0eb16fcbd5ac7ce5cace72d39335df678a" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.247086 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nv8wh" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383073 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc"] Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383517 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="extract-content" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383538 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="extract-content" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383555 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383564 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383579 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383587 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383608 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="extract-content" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383633 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="extract-content" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383656 4810 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383663 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383677 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="extract-utilities" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383685 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="extract-utilities" Feb 19 15:52:30 crc kubenswrapper[4810]: E0219 15:52:30.383696 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="extract-utilities" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383704 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="extract-utilities" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383933 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b7ad7c-cc02-4a75-8ac7-47efd18067b9" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383954 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="505f9989-9548-4391-b758-33ef9484f145" containerName="registry-server" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.383974 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5014f8-e5aa-47ad-8787-c187b0f7f0e1" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.384720 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.388037 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.388408 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-djc4q" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.388492 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.388693 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.388845 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.401027 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc"] Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.446818 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.446896 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.446947 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvxh\" (UniqueName: \"kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.447003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.447024 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.447058 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.447120 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549126 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvxh\" (UniqueName: \"kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549362 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549425 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549528 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549647 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.549759 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.554406 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.554918 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.555258 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.555720 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.556140 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: 
\"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.558259 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.567036 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvxh\" (UniqueName: \"kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:30 crc kubenswrapper[4810]: I0219 15:52:30.720203 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:52:31 crc kubenswrapper[4810]: I0219 15:52:31.294518 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc"] Feb 19 15:52:31 crc kubenswrapper[4810]: W0219 15:52:31.300413 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7ca8c9a_db61_400f_9319_21590462f929.slice/crio-20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa WatchSource:0}: Error finding container 20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa: Status 404 returned error can't find the container with id 20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa Feb 19 15:52:31 crc kubenswrapper[4810]: I0219 15:52:31.818362 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 15:52:32 crc kubenswrapper[4810]: I0219 15:52:32.267787 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" event={"ID":"f7ca8c9a-db61-400f-9319-21590462f929","Type":"ContainerStarted","Data":"d0eed7b2d2efeb2bed27dcf249640e7dd353ed994126d4417aa3f9b188221638"} Feb 19 15:52:32 crc kubenswrapper[4810]: I0219 15:52:32.268193 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" event={"ID":"f7ca8c9a-db61-400f-9319-21590462f929","Type":"ContainerStarted","Data":"20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa"} Feb 19 15:52:32 crc kubenswrapper[4810]: I0219 15:52:32.296684 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" podStartSLOduration=1.784789517 podStartE2EDuration="2.296666748s" podCreationTimestamp="2026-02-19 15:52:30 +0000 UTC" 
firstStartedPulling="2026-02-19 15:52:31.303411006 +0000 UTC m=+2580.785441140" lastFinishedPulling="2026-02-19 15:52:31.815288207 +0000 UTC m=+2581.297318371" observedRunningTime="2026-02-19 15:52:32.289434098 +0000 UTC m=+2581.771464212" watchObservedRunningTime="2026-02-19 15:52:32.296666748 +0000 UTC m=+2581.778696872" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.365151 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.374621 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.405522 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.460581 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.460676 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fmk9\" (UniqueName: \"kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.460719 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities\") pod \"redhat-marketplace-j8tzn\" 
(UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.563369 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.563628 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.563687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fmk9\" (UniqueName: \"kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.564459 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.564471 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " 
pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.587967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fmk9\" (UniqueName: \"kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9\") pod \"redhat-marketplace-j8tzn\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:30 crc kubenswrapper[4810]: I0219 15:53:30.719938 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:31 crc kubenswrapper[4810]: I0219 15:53:31.257215 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:32 crc kubenswrapper[4810]: I0219 15:53:32.015848 4810 generic.go:334] "Generic (PLEG): container finished" podID="af674839-9e12-4569-bee5-abcf06b09ee4" containerID="896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478" exitCode=0 Feb 19 15:53:32 crc kubenswrapper[4810]: I0219 15:53:32.016087 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerDied","Data":"896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478"} Feb 19 15:53:32 crc kubenswrapper[4810]: I0219 15:53:32.020111 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerStarted","Data":"2dca8097edd5d82f15db95e6d4de2a250e4adea34e9887f2e668fad9bb9de061"} Feb 19 15:53:33 crc kubenswrapper[4810]: I0219 15:53:33.029627 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" 
event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerStarted","Data":"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e"} Feb 19 15:53:34 crc kubenswrapper[4810]: I0219 15:53:34.045390 4810 generic.go:334] "Generic (PLEG): container finished" podID="af674839-9e12-4569-bee5-abcf06b09ee4" containerID="f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e" exitCode=0 Feb 19 15:53:34 crc kubenswrapper[4810]: I0219 15:53:34.045440 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerDied","Data":"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e"} Feb 19 15:53:35 crc kubenswrapper[4810]: I0219 15:53:35.057881 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerStarted","Data":"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526"} Feb 19 15:53:35 crc kubenswrapper[4810]: I0219 15:53:35.082481 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8tzn" podStartSLOduration=2.651895813 podStartE2EDuration="5.082463968s" podCreationTimestamp="2026-02-19 15:53:30 +0000 UTC" firstStartedPulling="2026-02-19 15:53:32.018768714 +0000 UTC m=+2641.500798848" lastFinishedPulling="2026-02-19 15:53:34.449336869 +0000 UTC m=+2643.931367003" observedRunningTime="2026-02-19 15:53:35.080023737 +0000 UTC m=+2644.562053871" watchObservedRunningTime="2026-02-19 15:53:35.082463968 +0000 UTC m=+2644.564494102" Feb 19 15:53:36 crc kubenswrapper[4810]: E0219 15:53:36.515265 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:53:40 crc kubenswrapper[4810]: I0219 15:53:40.720474 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:40 crc kubenswrapper[4810]: I0219 15:53:40.721455 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:40 crc kubenswrapper[4810]: I0219 15:53:40.796961 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:41 crc kubenswrapper[4810]: I0219 15:53:41.195582 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:41 crc kubenswrapper[4810]: I0219 15:53:41.253381 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.148000 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8tzn" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="registry-server" containerID="cri-o://7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526" gracePeriod=2 Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.749163 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.787544 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fmk9\" (UniqueName: \"kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9\") pod \"af674839-9e12-4569-bee5-abcf06b09ee4\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.787906 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities\") pod \"af674839-9e12-4569-bee5-abcf06b09ee4\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.788182 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content\") pod \"af674839-9e12-4569-bee5-abcf06b09ee4\" (UID: \"af674839-9e12-4569-bee5-abcf06b09ee4\") " Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.789065 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities" (OuterVolumeSpecName: "utilities") pod "af674839-9e12-4569-bee5-abcf06b09ee4" (UID: "af674839-9e12-4569-bee5-abcf06b09ee4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.801872 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9" (OuterVolumeSpecName: "kube-api-access-2fmk9") pod "af674839-9e12-4569-bee5-abcf06b09ee4" (UID: "af674839-9e12-4569-bee5-abcf06b09ee4"). InnerVolumeSpecName "kube-api-access-2fmk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.841260 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af674839-9e12-4569-bee5-abcf06b09ee4" (UID: "af674839-9e12-4569-bee5-abcf06b09ee4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.891390 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.891680 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fmk9\" (UniqueName: \"kubernetes.io/projected/af674839-9e12-4569-bee5-abcf06b09ee4-kube-api-access-2fmk9\") on node \"crc\" DevicePath \"\"" Feb 19 15:53:43 crc kubenswrapper[4810]: I0219 15:53:43.891766 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af674839-9e12-4569-bee5-abcf06b09ee4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.158592 4810 generic.go:334] "Generic (PLEG): container finished" podID="af674839-9e12-4569-bee5-abcf06b09ee4" containerID="7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526" exitCode=0 Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.158654 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tzn" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.158654 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerDied","Data":"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526"} Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.158835 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tzn" event={"ID":"af674839-9e12-4569-bee5-abcf06b09ee4","Type":"ContainerDied","Data":"2dca8097edd5d82f15db95e6d4de2a250e4adea34e9887f2e668fad9bb9de061"} Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.158869 4810 scope.go:117] "RemoveContainer" containerID="7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.180812 4810 scope.go:117] "RemoveContainer" containerID="f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.199454 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.218132 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tzn"] Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.220727 4810 scope.go:117] "RemoveContainer" containerID="896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.271766 4810 scope.go:117] "RemoveContainer" containerID="7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526" Feb 19 15:53:44 crc kubenswrapper[4810]: E0219 15:53:44.272726 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526\": container with ID starting with 7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526 not found: ID does not exist" containerID="7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.272779 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526"} err="failed to get container status \"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526\": rpc error: code = NotFound desc = could not find container \"7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526\": container with ID starting with 7497e6376bfd2621cb0d8554208fdc97c2fee88ae0127ebe48b955d5a65f7526 not found: ID does not exist" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.272816 4810 scope.go:117] "RemoveContainer" containerID="f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e" Feb 19 15:53:44 crc kubenswrapper[4810]: E0219 15:53:44.273093 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e\": container with ID starting with f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e not found: ID does not exist" containerID="f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.273133 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e"} err="failed to get container status \"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e\": rpc error: code = NotFound desc = could not find container \"f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e\": container with ID 
starting with f336b0a144f66301814adca72bf53897501afb1e619a9505a15772da7d9fb52e not found: ID does not exist" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.273176 4810 scope.go:117] "RemoveContainer" containerID="896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478" Feb 19 15:53:44 crc kubenswrapper[4810]: E0219 15:53:44.273842 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478\": container with ID starting with 896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478 not found: ID does not exist" containerID="896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478" Feb 19 15:53:44 crc kubenswrapper[4810]: I0219 15:53:44.273886 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478"} err="failed to get container status \"896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478\": rpc error: code = NotFound desc = could not find container \"896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478\": container with ID starting with 896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478 not found: ID does not exist" Feb 19 15:53:45 crc kubenswrapper[4810]: I0219 15:53:45.453884 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" path="/var/lib/kubelet/pods/af674839-9e12-4569-bee5-abcf06b09ee4/volumes" Feb 19 15:53:46 crc kubenswrapper[4810]: E0219 15:53:46.865974 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:53:57 crc kubenswrapper[4810]: E0219 15:53:57.142929 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:54:07 crc kubenswrapper[4810]: E0219 15:54:07.459212 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:54:17 crc kubenswrapper[4810]: E0219 15:54:17.753708 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:54:28 crc kubenswrapper[4810]: E0219 15:54:28.082642 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf674839_9e12_4569_bee5_abcf06b09ee4.slice/crio-conmon-896186bd577c6d6da0fe0df76aea6a2e25eecc44dada34f1358dac166c3d7478.scope\": RecentStats: unable to find data in memory cache]" Feb 19 15:54:31 crc kubenswrapper[4810]: E0219 15:54:31.470995 4810 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6e750a3a26584da6ec5e11eb1f18ab5b6d020df87b3986858e10a7310cc6e786/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6e750a3a26584da6ec5e11eb1f18ab5b6d020df87b3986858e10a7310cc6e786/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-marketplace_redhat-marketplace-j8tzn_af674839-9e12-4569-bee5-abcf06b09ee4/extract-utilities/0.log" to get inode usage: stat /var/log/pods/openshift-marketplace_redhat-marketplace-j8tzn_af674839-9e12-4569-bee5-abcf06b09ee4/extract-utilities/0.log: no such file or directory Feb 19 15:54:36 crc kubenswrapper[4810]: I0219 15:54:36.757898 4810 generic.go:334] "Generic (PLEG): container finished" podID="f7ca8c9a-db61-400f-9319-21590462f929" containerID="d0eed7b2d2efeb2bed27dcf249640e7dd353ed994126d4417aa3f9b188221638" exitCode=0 Feb 19 15:54:36 crc kubenswrapper[4810]: I0219 15:54:36.758536 4810 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" event={"ID":"f7ca8c9a-db61-400f-9319-21590462f929","Type":"ContainerDied","Data":"d0eed7b2d2efeb2bed27dcf249640e7dd353ed994126d4417aa3f9b188221638"} Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.323432 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417143 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417443 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417483 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: 
I0219 15:54:38.417544 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417656 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldvxh\" (UniqueName: \"kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.417740 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2\") pod \"f7ca8c9a-db61-400f-9319-21590462f929\" (UID: \"f7ca8c9a-db61-400f-9319-21590462f929\") " Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.424199 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.424637 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh" (OuterVolumeSpecName: "kube-api-access-ldvxh") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "kube-api-access-ldvxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.447072 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.449121 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.450242 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.454046 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.467411 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory" (OuterVolumeSpecName: "inventory") pod "f7ca8c9a-db61-400f-9319-21590462f929" (UID: "f7ca8c9a-db61-400f-9319-21590462f929"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521044 4810 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521096 4810 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521117 4810 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521138 4810 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521159 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521179 4810 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldvxh\" (UniqueName: \"kubernetes.io/projected/f7ca8c9a-db61-400f-9319-21590462f929-kube-api-access-ldvxh\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.521196 4810 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f7ca8c9a-db61-400f-9319-21590462f929-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.802293 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" event={"ID":"f7ca8c9a-db61-400f-9319-21590462f929","Type":"ContainerDied","Data":"20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa"} Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.802765 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a3b9e0f65be3023f64999fad259ea78baedb786dcbbe39432440197d3ff6aa" Feb 19 15:54:38 crc kubenswrapper[4810]: I0219 15:54:38.802979 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc" Feb 19 15:54:49 crc kubenswrapper[4810]: I0219 15:54:49.538112 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:54:49 crc kubenswrapper[4810]: I0219 15:54:49.538864 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.331602 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: E0219 15:55:12.332430 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="registry-server" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332442 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="registry-server" Feb 19 15:55:12 crc kubenswrapper[4810]: E0219 15:55:12.332458 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="extract-utilities" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332465 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="extract-utilities" Feb 19 15:55:12 crc kubenswrapper[4810]: E0219 15:55:12.332483 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ca8c9a-db61-400f-9319-21590462f929" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332490 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ca8c9a-db61-400f-9319-21590462f929" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 15:55:12 crc kubenswrapper[4810]: E0219 15:55:12.332504 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="extract-content" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332510 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="extract-content" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332682 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ca8c9a-db61-400f-9319-21590462f929" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.332697 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="af674839-9e12-4569-bee5-abcf06b09ee4" containerName="registry-server" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.333616 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.340458 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.344721 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351536 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351572 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351592 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cqj\" (UniqueName: \"kubernetes.io/projected/f66b86b2-b164-4380-8a89-bb0cf5f833ef-kube-api-access-z5cqj\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351753 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-run\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351794 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-sys\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351821 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351870 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.351901 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352014 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352148 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-dev\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352374 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-scripts\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352398 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.352468 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.422457 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.425642 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.427833 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.440827 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454394 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454433 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454466 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454533 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454582 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454598 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-dev\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454622 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-scripts\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454636 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454671 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: 
I0219 15:55:12.454720 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454749 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454766 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cqj\" (UniqueName: \"kubernetes.io/projected/f66b86b2-b164-4380-8a89-bb0cf5f833ef-kube-api-access-z5cqj\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454808 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-run\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454823 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-sys\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454840 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-nvme\") pod 
\"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.454960 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455242 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455390 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-run\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455429 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-sys\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455442 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455480 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.455661 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-dev\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.459531 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.460101 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.460142 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f66b86b2-b164-4380-8a89-bb0cf5f833ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.461200 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.476340 
4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.476821 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.478511 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.480771 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66b86b2-b164-4380-8a89-bb0cf5f833ef-scripts\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.489771 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.489771 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cqj\" (UniqueName: \"kubernetes.io/projected/f66b86b2-b164-4380-8a89-bb0cf5f833ef-kube-api-access-z5cqj\") pod \"cinder-backup-0\" (UID: \"f66b86b2-b164-4380-8a89-bb0cf5f833ef\") " pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.492120 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.520165 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556200 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556248 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556290 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556583 
4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556663 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-run\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556696 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556727 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556874 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.556963 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557000 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557138 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557175 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-sys\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557300 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-dev\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557355 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.557389 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ms54\" (UniqueName: \"kubernetes.io/projected/20a46eb8-508d-45be-bf13-31aed23d1582-kube-api-access-8ms54\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.654703 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.669876 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.669965 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670048 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670079 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-run\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670109 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670143 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670165 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670198 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670225 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670270 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670298 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.670385 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.674496 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-run\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " 
pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.674592 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.674641 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.674705 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.674903 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.675588 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.675696 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-sys\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.675790 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.675884 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676046 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-dev\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676142 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676247 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ms54\" (UniqueName: \"kubernetes.io/projected/20a46eb8-508d-45be-bf13-31aed23d1582-kube-api-access-8ms54\") pod \"cinder-volume-nfs-0\" (UID: 
\"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676397 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676531 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676672 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676835 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.676910 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc 
kubenswrapper[4810]: I0219 15:55:12.676971 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.677056 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.677140 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.677149 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.677187 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9h55\" (UniqueName: \"kubernetes.io/projected/74a12495-8d82-4296-9328-430af6d923b2-kube-api-access-l9h55\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.678371 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-sys\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.678727 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.678758 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.678798 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20a46eb8-508d-45be-bf13-31aed23d1582-dev\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.678982 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.689610 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " 
pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.699235 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.702506 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.706061 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a46eb8-508d-45be-bf13-31aed23d1582-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.721023 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ms54\" (UniqueName: \"kubernetes.io/projected/20a46eb8-508d-45be-bf13-31aed23d1582-kube-api-access-8ms54\") pod \"cinder-volume-nfs-0\" (UID: \"20a46eb8-508d-45be-bf13-31aed23d1582\") " pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.761931 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.782965 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783012 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783036 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783059 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783078 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783114 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783139 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783153 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783183 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783200 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783223 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-brick\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783269 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783310 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9h55\" (UniqueName: \"kubernetes.io/projected/74a12495-8d82-4296-9328-430af6d923b2-kube-api-access-l9h55\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc 
kubenswrapper[4810]: I0219 15:55:12.783945 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783971 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.783994 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.784017 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.785530 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.785577 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.785618 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.785733 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.786093 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74a12495-8d82-4296-9328-430af6d923b2-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.789452 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.790414 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc 
kubenswrapper[4810]: I0219 15:55:12.796040 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.800263 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a12495-8d82-4296-9328-430af6d923b2-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.810647 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9h55\" (UniqueName: \"kubernetes.io/projected/74a12495-8d82-4296-9328-430af6d923b2-kube-api-access-l9h55\") pod \"cinder-volume-nfs-2-0\" (UID: \"74a12495-8d82-4296-9328-430af6d923b2\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:12 crc kubenswrapper[4810]: I0219 15:55:12.865385 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:13 crc kubenswrapper[4810]: I0219 15:55:13.231420 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 15:55:13 crc kubenswrapper[4810]: I0219 15:55:13.232193 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 15:55:13 crc kubenswrapper[4810]: I0219 15:55:13.247733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f66b86b2-b164-4380-8a89-bb0cf5f833ef","Type":"ContainerStarted","Data":"5e8f85fe54dd5e1ff99ce6a0b3b1973b9b7f40c59cdb226f912652060a92e50d"} Feb 19 15:55:13 crc kubenswrapper[4810]: I0219 15:55:13.357949 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 15:55:13 crc kubenswrapper[4810]: W0219 15:55:13.380568 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20a46eb8_508d_45be_bf13_31aed23d1582.slice/crio-caa41dbb4ab4a4e44b929f9e401457fad90d829340625a561aa3366a426ed26f WatchSource:0}: Error finding container caa41dbb4ab4a4e44b929f9e401457fad90d829340625a561aa3366a426ed26f: Status 404 returned error can't find the container with id caa41dbb4ab4a4e44b929f9e401457fad90d829340625a561aa3366a426ed26f Feb 19 15:55:13 crc kubenswrapper[4810]: I0219 15:55:13.454958 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.261288 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"20a46eb8-508d-45be-bf13-31aed23d1582","Type":"ContainerStarted","Data":"3f67b6a567911aefaeebcb4cb8f53b7df187985f6339bf75ae32887067f8caea"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.261781 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" 
event={"ID":"20a46eb8-508d-45be-bf13-31aed23d1582","Type":"ContainerStarted","Data":"6adc24350d3170c12c6b597c23afb4f51983344ab109c740e5de2016b94b66d0"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.261793 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"20a46eb8-508d-45be-bf13-31aed23d1582","Type":"ContainerStarted","Data":"caa41dbb4ab4a4e44b929f9e401457fad90d829340625a561aa3366a426ed26f"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.263461 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f66b86b2-b164-4380-8a89-bb0cf5f833ef","Type":"ContainerStarted","Data":"fa7ec313ba6e9576a0b0fe26b9bd5c94fbc1c00f4438d71688b30f2a7f92e5b2"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.263523 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f66b86b2-b164-4380-8a89-bb0cf5f833ef","Type":"ContainerStarted","Data":"9efdda8c20157fd0a49e2aa9e42453f7ef8c5a0a98846e8b6c156c34e8c0e905"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.265447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"74a12495-8d82-4296-9328-430af6d923b2","Type":"ContainerStarted","Data":"ac5b7f2828f61565b87a1ad1cd03ce4f3eb142d433b910028fa40ad25f3cfa3f"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.265485 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"74a12495-8d82-4296-9328-430af6d923b2","Type":"ContainerStarted","Data":"4d633a099a5f16eb87773769986770cdb28b6232f8d620e57d8768afa212c73e"} Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.265495 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"74a12495-8d82-4296-9328-430af6d923b2","Type":"ContainerStarted","Data":"ed513194dd7a21859393ce2fccc3bf6bdf7ed97abb1119d451cf1462e1b04fa3"} Feb 19 
15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.285888 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.042332184 podStartE2EDuration="2.285873434s" podCreationTimestamp="2026-02-19 15:55:12 +0000 UTC" firstStartedPulling="2026-02-19 15:55:13.383477464 +0000 UTC m=+2742.865507588" lastFinishedPulling="2026-02-19 15:55:13.627018684 +0000 UTC m=+2743.109048838" observedRunningTime="2026-02-19 15:55:14.283731071 +0000 UTC m=+2743.765761195" watchObservedRunningTime="2026-02-19 15:55:14.285873434 +0000 UTC m=+2743.767903558" Feb 19 15:55:14 crc kubenswrapper[4810]: I0219 15:55:14.308794 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.231381246 podStartE2EDuration="2.308772485s" podCreationTimestamp="2026-02-19 15:55:12 +0000 UTC" firstStartedPulling="2026-02-19 15:55:13.556375173 +0000 UTC m=+2743.038405317" lastFinishedPulling="2026-02-19 15:55:13.633766432 +0000 UTC m=+2743.115796556" observedRunningTime="2026-02-19 15:55:14.301726729 +0000 UTC m=+2743.783756853" watchObservedRunningTime="2026-02-19 15:55:14.308772485 +0000 UTC m=+2743.790802609" Feb 19 15:55:17 crc kubenswrapper[4810]: I0219 15:55:17.655199 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 19 15:55:17 crc kubenswrapper[4810]: I0219 15:55:17.762509 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:17 crc kubenswrapper[4810]: I0219 15:55:17.866275 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:19 crc kubenswrapper[4810]: I0219 15:55:19.537344 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:55:19 crc kubenswrapper[4810]: I0219 15:55:19.537740 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:55:22 crc kubenswrapper[4810]: I0219 15:55:22.862557 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 19 15:55:22 crc kubenswrapper[4810]: I0219 15:55:22.896599 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=10.580401542 podStartE2EDuration="10.896580791s" podCreationTimestamp="2026-02-19 15:55:12 +0000 UTC" firstStartedPulling="2026-02-19 15:55:13.231975819 +0000 UTC m=+2742.714005943" lastFinishedPulling="2026-02-19 15:55:13.548155058 +0000 UTC m=+2743.030185192" observedRunningTime="2026-02-19 15:55:14.324516787 +0000 UTC m=+2743.806546931" watchObservedRunningTime="2026-02-19 15:55:22.896580791 +0000 UTC m=+2752.378610905" Feb 19 15:55:23 crc kubenswrapper[4810]: I0219 15:55:23.015478 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 19 15:55:23 crc kubenswrapper[4810]: I0219 15:55:23.094167 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.537232 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.537930 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.537982 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.538685 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.538748 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896" gracePeriod=600 Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.878797 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896" exitCode=0 Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.878882 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896"} Feb 19 15:55:49 crc kubenswrapper[4810]: I0219 15:55:49.879192 4810 scope.go:117] "RemoveContainer" containerID="75401b3edfa84b40903edbf5e5d73ff2da026a9f8214b6a2404ce809eb32bd1f" Feb 19 15:55:50 crc kubenswrapper[4810]: I0219 15:55:50.892910 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09"} Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.156890 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.158077 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="prometheus" containerID="cri-o://09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d" gracePeriod=600 Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.158154 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="thanos-sidecar" containerID="cri-o://ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962" gracePeriod=600 Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.158201 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="config-reloader" containerID="cri-o://1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f" gracePeriod=600 Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.532312 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerID="ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962" exitCode=0 Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.532590 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerID="09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d" exitCode=0 Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.532353 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerDied","Data":"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962"} Feb 19 15:56:43 crc kubenswrapper[4810]: I0219 15:56:43.532620 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerDied","Data":"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d"} Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.240634 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344703 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344783 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344861 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344921 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344944 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: 
\"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.344994 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345041 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345069 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmlm6\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345128 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345176 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " 
Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345218 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345250 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345283 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345410 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"5c213a3a-78fd-4b42-bc1c-e09837eae684\" (UID: \"5c213a3a-78fd-4b42-bc1c-e09837eae684\") " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.345838 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.348859 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.349261 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.355431 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out" (OuterVolumeSpecName: "config-out") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.356197 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.357503 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.358204 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.358848 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.361626 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6" (OuterVolumeSpecName: "kube-api-access-zmlm6") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "kube-api-access-zmlm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.368359 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.369827 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config" (OuterVolumeSpecName: "config") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.394749 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "pvc-b3b143f1-488b-49bf-8792-af0d760f341e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.441192 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config" (OuterVolumeSpecName: "web-config") pod "5c213a3a-78fd-4b42-bc1c-e09837eae684" (UID: "5c213a3a-78fd-4b42-bc1c-e09837eae684"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448226 4810 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448256 4810 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448270 4810 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448280 4810 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448315 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") on node \"crc\" " Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448338 4810 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c213a3a-78fd-4b42-bc1c-e09837eae684-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448348 4810 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448358 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448367 4810 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c213a3a-78fd-4b42-bc1c-e09837eae684-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448376 4810 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c213a3a-78fd-4b42-bc1c-e09837eae684-prometheus-metric-storage-rulefiles-0\") on node \"crc\" 
DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448385 4810 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.448395 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmlm6\" (UniqueName: \"kubernetes.io/projected/5c213a3a-78fd-4b42-bc1c-e09837eae684-kube-api-access-zmlm6\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.477350 4810 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.477535 4810 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b3b143f1-488b-49bf-8792-af0d760f341e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e") on node "crc" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.543417 4810 generic.go:334] "Generic (PLEG): container finished" podID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerID="1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f" exitCode=0 Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.543464 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerDied","Data":"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f"} Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.543493 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5c213a3a-78fd-4b42-bc1c-e09837eae684","Type":"ContainerDied","Data":"971051d9ba9c03f03793d16fa10f74ed254c90779c474f368dd54279d38af75a"} Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.543514 4810 
scope.go:117] "RemoveContainer" containerID="ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.543649 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.550088 4810 reconciler_common.go:293] "Volume detached for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") on node \"crc\" DevicePath \"\"" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.584035 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.592181 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615093 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.615476 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="prometheus" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615493 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="prometheus" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.615503 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="thanos-sidecar" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615511 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="thanos-sidecar" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.615529 4810 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="init-config-reloader" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615537 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="init-config-reloader" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.615557 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="config-reloader" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615563 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="config-reloader" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615761 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="prometheus" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615776 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="thanos-sidecar" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.615791 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" containerName="config-reloader" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.617529 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.624738 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629244 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629344 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629395 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629476 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629563 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.629614 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-x7hn6" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.632663 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.650943 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.675853 4810 scope.go:117] "RemoveContainer" containerID="1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.712831 4810 scope.go:117] "RemoveContainer" 
containerID="09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.735500 4810 scope.go:117] "RemoveContainer" containerID="f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754389 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754447 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754501 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754538 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf65af35-1e80-49a0-ada2-3bd027193193-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754555 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754590 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754611 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754627 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754674 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754695 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkgk\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-kube-api-access-7mkgk\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754711 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754732 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.754767 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc 
kubenswrapper[4810]: I0219 15:56:44.758128 4810 scope.go:117] "RemoveContainer" containerID="ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.762781 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962\": container with ID starting with ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962 not found: ID does not exist" containerID="ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.762810 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962"} err="failed to get container status \"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962\": rpc error: code = NotFound desc = could not find container \"ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962\": container with ID starting with ab34c252f979f018755a4fd37fb43b24f2dc32b972b84fea55031898aade6962 not found: ID does not exist" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.762830 4810 scope.go:117] "RemoveContainer" containerID="1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.763297 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f\": container with ID starting with 1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f not found: ID does not exist" containerID="1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.763357 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f"} err="failed to get container status \"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f\": rpc error: code = NotFound desc = could not find container \"1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f\": container with ID starting with 1cf6b1e26b593f87dbae05831261b5448c34285d54ffc513dcc21ce3c137d57f not found: ID does not exist" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.763385 4810 scope.go:117] "RemoveContainer" containerID="09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.764011 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d\": container with ID starting with 09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d not found: ID does not exist" containerID="09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.764043 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d"} err="failed to get container status \"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d\": rpc error: code = NotFound desc = could not find container \"09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d\": container with ID starting with 09171322701d01cc2b65792ae9294bf2facdbb2647bec3473b5a2f2c7595769d not found: ID does not exist" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.764059 4810 scope.go:117] "RemoveContainer" containerID="f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec" Feb 19 15:56:44 crc kubenswrapper[4810]: E0219 15:56:44.767470 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec\": container with ID starting with f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec not found: ID does not exist" containerID="f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.767519 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec"} err="failed to get container status \"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec\": rpc error: code = NotFound desc = could not find container \"f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec\": container with ID starting with f004bfd72f8d26f3aaf22a2c6561639f20d7d6d9b961893f534ded590f3c40ec not found: ID does not exist" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856241 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856522 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856549 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856576 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856631 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856670 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf65af35-1e80-49a0-ada2-3bd027193193-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856689 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 
15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856728 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856750 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856821 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856843 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkgk\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-kube-api-access-7mkgk\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 
19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.856860 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.857245 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.857824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.859115 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.859160 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e63bc62ea909687cb5abb0c5cf8da7008d795f1441aaff1987b707a42a388027/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.859195 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/bf65af35-1e80-49a0-ada2-3bd027193193-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.862124 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.862465 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.862700 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.863426 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.863485 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.863809 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf65af35-1e80-49a0-ada2-3bd027193193-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.867759 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 
15:56:44.867850 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf65af35-1e80-49a0-ada2-3bd027193193-config\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.889173 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkgk\" (UniqueName: \"kubernetes.io/projected/bf65af35-1e80-49a0-ada2-3bd027193193-kube-api-access-7mkgk\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.906194 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b3b143f1-488b-49bf-8792-af0d760f341e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3b143f1-488b-49bf-8792-af0d760f341e\") pod \"prometheus-metric-storage-0\" (UID: \"bf65af35-1e80-49a0-ada2-3bd027193193\") " pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:44 crc kubenswrapper[4810]: I0219 15:56:44.934315 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 15:56:45 crc kubenswrapper[4810]: I0219 15:56:45.430708 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 15:56:45 crc kubenswrapper[4810]: W0219 15:56:45.442879 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf65af35_1e80_49a0_ada2_3bd027193193.slice/crio-5c2643d70b93f742d55230bda1d4d99a26796b1faec3f2d8cd40094f4cdf3174 WatchSource:0}: Error finding container 5c2643d70b93f742d55230bda1d4d99a26796b1faec3f2d8cd40094f4cdf3174: Status 404 returned error can't find the container with id 5c2643d70b93f742d55230bda1d4d99a26796b1faec3f2d8cd40094f4cdf3174 Feb 19 15:56:45 crc kubenswrapper[4810]: I0219 15:56:45.451175 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c213a3a-78fd-4b42-bc1c-e09837eae684" path="/var/lib/kubelet/pods/5c213a3a-78fd-4b42-bc1c-e09837eae684/volumes" Feb 19 15:56:45 crc kubenswrapper[4810]: I0219 15:56:45.554341 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerStarted","Data":"5c2643d70b93f742d55230bda1d4d99a26796b1faec3f2d8cd40094f4cdf3174"} Feb 19 15:56:50 crc kubenswrapper[4810]: I0219 15:56:50.615866 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerStarted","Data":"1beb7acac6bd1bedd6051f55579bb2b30e8e57731225978bffc636370ec9ae34"} Feb 19 15:56:59 crc kubenswrapper[4810]: I0219 15:56:59.736576 4810 generic.go:334] "Generic (PLEG): container finished" podID="bf65af35-1e80-49a0-ada2-3bd027193193" containerID="1beb7acac6bd1bedd6051f55579bb2b30e8e57731225978bffc636370ec9ae34" exitCode=0 Feb 19 15:56:59 crc kubenswrapper[4810]: I0219 15:56:59.736692 4810 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerDied","Data":"1beb7acac6bd1bedd6051f55579bb2b30e8e57731225978bffc636370ec9ae34"} Feb 19 15:57:00 crc kubenswrapper[4810]: I0219 15:57:00.749303 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerStarted","Data":"e2bf6727a3974e363277c9433eaf103264295c66ded7f90846b6564632582353"} Feb 19 15:57:05 crc kubenswrapper[4810]: I0219 15:57:05.811630 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerStarted","Data":"e50299ea42b7be305f4cb7128ef81136eaa54995403b6f0bb84bd2027eedad79"} Feb 19 15:57:05 crc kubenswrapper[4810]: I0219 15:57:05.812411 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bf65af35-1e80-49a0-ada2-3bd027193193","Type":"ContainerStarted","Data":"f28b0f870796240cb37e08bd8fd4ba28ddff6279c418df9a1a6cdb0e1602bde6"} Feb 19 15:57:05 crc kubenswrapper[4810]: I0219 15:57:05.863267 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.863234206 podStartE2EDuration="21.863234206s" podCreationTimestamp="2026-02-19 15:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 15:57:05.845222278 +0000 UTC m=+2855.327252472" watchObservedRunningTime="2026-02-19 15:57:05.863234206 +0000 UTC m=+2855.345264370" Feb 19 15:57:09 crc kubenswrapper[4810]: I0219 15:57:09.936010 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 15:57:14 crc kubenswrapper[4810]: I0219 15:57:14.935973 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 15:57:14 crc kubenswrapper[4810]: I0219 15:57:14.946854 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 15:57:15 crc kubenswrapper[4810]: I0219 15:57:15.934478 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 15:57:32 crc kubenswrapper[4810]: I0219 15:57:32.970508 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 15:57:32 crc kubenswrapper[4810]: I0219 15:57:32.974492 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:32.991357 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.009386 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.009711 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.009959 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.010095 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hvstp" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047400 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047472 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047570 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047738 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047806 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047895 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " 
pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.047973 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.048110 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.048259 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5x5\" (UniqueName: \"kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.150714 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5x5\" (UniqueName: \"kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.150872 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.150931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151024 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151236 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151431 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 
15:57:33.151461 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151534 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.151610 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.152973 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.153315 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.153781 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.154306 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.163240 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.168909 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.170132 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.179540 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5x5\" (UniqueName: 
\"kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.201586 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.334176 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 15:57:33 crc kubenswrapper[4810]: I0219 15:57:33.808614 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 15:57:33 crc kubenswrapper[4810]: W0219 15:57:33.817059 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4c017a9_c049_4baa_acc0_e08a25437c90.slice/crio-bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2 WatchSource:0}: Error finding container bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2: Status 404 returned error can't find the container with id bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2 Feb 19 15:57:34 crc kubenswrapper[4810]: I0219 15:57:34.130763 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4c017a9-c049-4baa-acc0-e08a25437c90","Type":"ContainerStarted","Data":"bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2"} Feb 19 15:57:44 crc kubenswrapper[4810]: I0219 15:57:44.253871 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"a4c017a9-c049-4baa-acc0-e08a25437c90","Type":"ContainerStarted","Data":"7c1119346fc25e4a8eb25c191fe4eed4ab3589389debf84e048e3e376479d897"} Feb 19 15:57:44 crc kubenswrapper[4810]: I0219 15:57:44.286535 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.497704982 podStartE2EDuration="13.286507051s" podCreationTimestamp="2026-02-19 15:57:31 +0000 UTC" firstStartedPulling="2026-02-19 15:57:33.820892829 +0000 UTC m=+2883.302922963" lastFinishedPulling="2026-02-19 15:57:42.609694898 +0000 UTC m=+2892.091725032" observedRunningTime="2026-02-19 15:57:44.275482827 +0000 UTC m=+2893.757512991" watchObservedRunningTime="2026-02-19 15:57:44.286507051 +0000 UTC m=+2893.768537215" Feb 19 15:57:49 crc kubenswrapper[4810]: I0219 15:57:49.538039 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:57:49 crc kubenswrapper[4810]: I0219 15:57:49.538635 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:58:19 crc kubenswrapper[4810]: I0219 15:58:19.538000 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:58:19 crc kubenswrapper[4810]: I0219 15:58:19.538598 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:58:25 crc kubenswrapper[4810]: E0219 15:58:25.035168 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 19 15:58:49 crc kubenswrapper[4810]: I0219 15:58:49.538269 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 15:58:49 crc kubenswrapper[4810]: I0219 15:58:49.538780 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 15:58:49 crc kubenswrapper[4810]: I0219 15:58:49.538838 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 15:58:49 crc kubenswrapper[4810]: I0219 15:58:49.539806 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 15:58:49 crc kubenswrapper[4810]: I0219 15:58:49.539875 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" gracePeriod=600 Feb 19 15:58:49 crc kubenswrapper[4810]: E0219 15:58:49.673318 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:58:50 crc kubenswrapper[4810]: I0219 15:58:50.586589 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" exitCode=0 Feb 19 15:58:50 crc kubenswrapper[4810]: I0219 15:58:50.586636 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09"} Feb 19 15:58:50 crc kubenswrapper[4810]: I0219 15:58:50.586888 4810 scope.go:117] "RemoveContainer" containerID="6287fea84a36b36e84360e8242ff92d8a387529e6d7bde8d2a6410fbcb262896" Feb 19 15:58:50 crc kubenswrapper[4810]: I0219 15:58:50.587844 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 15:58:50 crc kubenswrapper[4810]: E0219 15:58:50.588137 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:59:06 crc kubenswrapper[4810]: I0219 15:59:06.442402 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 15:59:06 crc kubenswrapper[4810]: E0219 15:59:06.443952 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:59:21 crc kubenswrapper[4810]: I0219 15:59:21.448783 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 15:59:21 crc kubenswrapper[4810]: E0219 15:59:21.449909 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:59:35 crc kubenswrapper[4810]: I0219 15:59:35.439542 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 15:59:35 crc kubenswrapper[4810]: E0219 15:59:35.440787 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 15:59:50 crc kubenswrapper[4810]: I0219 15:59:50.439901 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 15:59:50 crc kubenswrapper[4810]: E0219 15:59:50.440676 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.152679 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd"] Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.154281 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.157512 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.158753 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.168556 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd"] Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.281402 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7fb\" (UniqueName: \"kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.281471 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.281531 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.384986 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.385234 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7fb\" (UniqueName: \"kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.385315 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.386741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.396192 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.408472 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7fb\" (UniqueName: \"kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb\") pod \"collect-profiles-29525280-gjtpd\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.495660 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:00 crc kubenswrapper[4810]: I0219 16:00:00.993839 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd"] Feb 19 16:00:00 crc kubenswrapper[4810]: W0219 16:00:00.997945 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1327b7dc_e5ad_463c_8ca9_89b735b1fec2.slice/crio-f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3 WatchSource:0}: Error finding container f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3: Status 404 returned error can't find the container with id f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3 Feb 19 16:00:01 crc kubenswrapper[4810]: I0219 16:00:01.470395 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" event={"ID":"1327b7dc-e5ad-463c-8ca9-89b735b1fec2","Type":"ContainerStarted","Data":"4e99f6d1c84e426443c4e13972b553f91aa0857b582f33dd75b9fc978d8acc56"} Feb 19 16:00:01 crc 
kubenswrapper[4810]: I0219 16:00:01.470752 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" event={"ID":"1327b7dc-e5ad-463c-8ca9-89b735b1fec2","Type":"ContainerStarted","Data":"f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3"} Feb 19 16:00:01 crc kubenswrapper[4810]: I0219 16:00:01.542964 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" podStartSLOduration=1.542937405 podStartE2EDuration="1.542937405s" podCreationTimestamp="2026-02-19 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:00:01.514763644 +0000 UTC m=+3030.996793798" watchObservedRunningTime="2026-02-19 16:00:01.542937405 +0000 UTC m=+3031.024967569" Feb 19 16:00:02 crc kubenswrapper[4810]: I0219 16:00:02.460169 4810 generic.go:334] "Generic (PLEG): container finished" podID="1327b7dc-e5ad-463c-8ca9-89b735b1fec2" containerID="4e99f6d1c84e426443c4e13972b553f91aa0857b582f33dd75b9fc978d8acc56" exitCode=0 Feb 19 16:00:02 crc kubenswrapper[4810]: I0219 16:00:02.460247 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" event={"ID":"1327b7dc-e5ad-463c-8ca9-89b735b1fec2","Type":"ContainerDied","Data":"4e99f6d1c84e426443c4e13972b553f91aa0857b582f33dd75b9fc978d8acc56"} Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.892567 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.970313 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7fb\" (UniqueName: \"kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb\") pod \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.970431 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume\") pod \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.970673 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume\") pod \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\" (UID: \"1327b7dc-e5ad-463c-8ca9-89b735b1fec2\") " Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.972308 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume" (OuterVolumeSpecName: "config-volume") pod "1327b7dc-e5ad-463c-8ca9-89b735b1fec2" (UID: "1327b7dc-e5ad-463c-8ca9-89b735b1fec2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.979596 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1327b7dc-e5ad-463c-8ca9-89b735b1fec2" (UID: "1327b7dc-e5ad-463c-8ca9-89b735b1fec2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:00:03 crc kubenswrapper[4810]: I0219 16:00:03.980293 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb" (OuterVolumeSpecName: "kube-api-access-gb7fb") pod "1327b7dc-e5ad-463c-8ca9-89b735b1fec2" (UID: "1327b7dc-e5ad-463c-8ca9-89b735b1fec2"). InnerVolumeSpecName "kube-api-access-gb7fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.073506 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb7fb\" (UniqueName: \"kubernetes.io/projected/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-kube-api-access-gb7fb\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.074153 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.074179 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1327b7dc-e5ad-463c-8ca9-89b735b1fec2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.488901 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" event={"ID":"1327b7dc-e5ad-463c-8ca9-89b735b1fec2","Type":"ContainerDied","Data":"f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3"} Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.488938 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0260a0d3be5bef0c92553d74849a54156c06eacc6e7908dad8600c08065b3d3" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.488997 4810 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd" Feb 19 16:00:04 crc kubenswrapper[4810]: I0219 16:00:04.990173 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg"] Feb 19 16:00:05 crc kubenswrapper[4810]: I0219 16:00:05.002633 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525235-vk2jg"] Feb 19 16:00:05 crc kubenswrapper[4810]: I0219 16:00:05.440046 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:00:05 crc kubenswrapper[4810]: E0219 16:00:05.440530 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:00:05 crc kubenswrapper[4810]: I0219 16:00:05.462603 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3fa539-f490-4b25-b592-d199cc757b8a" path="/var/lib/kubelet/pods/2f3fa539-f490-4b25-b592-d199cc757b8a/volumes" Feb 19 16:00:19 crc kubenswrapper[4810]: I0219 16:00:19.441416 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:00:19 crc kubenswrapper[4810]: E0219 16:00:19.442753 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:00:32 crc kubenswrapper[4810]: I0219 16:00:32.440647 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:00:32 crc kubenswrapper[4810]: E0219 16:00:32.441466 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:00:43 crc kubenswrapper[4810]: I0219 16:00:43.150698 4810 scope.go:117] "RemoveContainer" containerID="88bb50e9f73c36f14be470f12c634b9e24a853c43e328fa253f758c7a485a40a" Feb 19 16:00:44 crc kubenswrapper[4810]: I0219 16:00:44.439926 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:00:44 crc kubenswrapper[4810]: E0219 16:00:44.441285 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:00:55 crc kubenswrapper[4810]: I0219 16:00:55.439923 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:00:55 crc kubenswrapper[4810]: E0219 16:00:55.441827 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.159409 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525281-26qqv"] Feb 19 16:01:00 crc kubenswrapper[4810]: E0219 16:01:00.163018 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1327b7dc-e5ad-463c-8ca9-89b735b1fec2" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.163070 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1327b7dc-e5ad-463c-8ca9-89b735b1fec2" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.165093 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1327b7dc-e5ad-463c-8ca9-89b735b1fec2" containerName="collect-profiles" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.167854 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.195539 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.195724 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlzq\" (UniqueName: \"kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.195823 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.196018 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.197069 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525281-26qqv"] Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.298203 4810 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.298427 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.298529 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smlzq\" (UniqueName: \"kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.298623 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.307367 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.308591 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.309729 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.319193 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smlzq\" (UniqueName: \"kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq\") pod \"keystone-cron-29525281-26qqv\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.510546 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:00 crc kubenswrapper[4810]: I0219 16:01:00.792289 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525281-26qqv"] Feb 19 16:01:01 crc kubenswrapper[4810]: I0219 16:01:01.124509 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-26qqv" event={"ID":"8984eff3-6c82-4e2f-8bd6-1e820a450874","Type":"ContainerStarted","Data":"917515301fc786ad918811d2b3e221a682d1b36e815ac4c50924d29590c00e46"} Feb 19 16:01:01 crc kubenswrapper[4810]: I0219 16:01:01.124583 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-26qqv" event={"ID":"8984eff3-6c82-4e2f-8bd6-1e820a450874","Type":"ContainerStarted","Data":"7f756605f4dd1141e04daa9451f347481858b07b73d74e10611d02aa8b166bdd"} Feb 19 16:01:01 crc kubenswrapper[4810]: I0219 16:01:01.156166 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525281-26qqv" podStartSLOduration=1.156139913 podStartE2EDuration="1.156139913s" podCreationTimestamp="2026-02-19 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:01:01.15481363 +0000 UTC m=+3090.636843754" watchObservedRunningTime="2026-02-19 16:01:01.156139913 +0000 UTC m=+3090.638170077" Feb 19 16:01:04 crc kubenswrapper[4810]: I0219 16:01:04.173723 4810 generic.go:334] "Generic (PLEG): container finished" podID="8984eff3-6c82-4e2f-8bd6-1e820a450874" containerID="917515301fc786ad918811d2b3e221a682d1b36e815ac4c50924d29590c00e46" exitCode=0 Feb 19 16:01:04 crc kubenswrapper[4810]: I0219 16:01:04.173811 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-26qqv" 
event={"ID":"8984eff3-6c82-4e2f-8bd6-1e820a450874","Type":"ContainerDied","Data":"917515301fc786ad918811d2b3e221a682d1b36e815ac4c50924d29590c00e46"} Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.634166 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.818507 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle\") pod \"8984eff3-6c82-4e2f-8bd6-1e820a450874\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.818748 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys\") pod \"8984eff3-6c82-4e2f-8bd6-1e820a450874\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.818992 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smlzq\" (UniqueName: \"kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq\") pod \"8984eff3-6c82-4e2f-8bd6-1e820a450874\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.819233 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data\") pod \"8984eff3-6c82-4e2f-8bd6-1e820a450874\" (UID: \"8984eff3-6c82-4e2f-8bd6-1e820a450874\") " Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.825695 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq" 
(OuterVolumeSpecName: "kube-api-access-smlzq") pod "8984eff3-6c82-4e2f-8bd6-1e820a450874" (UID: "8984eff3-6c82-4e2f-8bd6-1e820a450874"). InnerVolumeSpecName "kube-api-access-smlzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.827381 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8984eff3-6c82-4e2f-8bd6-1e820a450874" (UID: "8984eff3-6c82-4e2f-8bd6-1e820a450874"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.865059 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8984eff3-6c82-4e2f-8bd6-1e820a450874" (UID: "8984eff3-6c82-4e2f-8bd6-1e820a450874"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.890179 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data" (OuterVolumeSpecName: "config-data") pod "8984eff3-6c82-4e2f-8bd6-1e820a450874" (UID: "8984eff3-6c82-4e2f-8bd6-1e820a450874"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.922135 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.922169 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.922181 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8984eff3-6c82-4e2f-8bd6-1e820a450874-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:05 crc kubenswrapper[4810]: I0219 16:01:05.922190 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smlzq\" (UniqueName: \"kubernetes.io/projected/8984eff3-6c82-4e2f-8bd6-1e820a450874-kube-api-access-smlzq\") on node \"crc\" DevicePath \"\"" Feb 19 16:01:06 crc kubenswrapper[4810]: I0219 16:01:06.195758 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525281-26qqv" event={"ID":"8984eff3-6c82-4e2f-8bd6-1e820a450874","Type":"ContainerDied","Data":"7f756605f4dd1141e04daa9451f347481858b07b73d74e10611d02aa8b166bdd"} Feb 19 16:01:06 crc kubenswrapper[4810]: I0219 16:01:06.195800 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f756605f4dd1141e04daa9451f347481858b07b73d74e10611d02aa8b166bdd" Feb 19 16:01:06 crc kubenswrapper[4810]: I0219 16:01:06.195908 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525281-26qqv" Feb 19 16:01:07 crc kubenswrapper[4810]: I0219 16:01:07.439357 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:01:07 crc kubenswrapper[4810]: E0219 16:01:07.440235 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:01:20 crc kubenswrapper[4810]: I0219 16:01:20.444283 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:01:20 crc kubenswrapper[4810]: E0219 16:01:20.445094 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:01:32 crc kubenswrapper[4810]: I0219 16:01:32.440666 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:01:32 crc kubenswrapper[4810]: E0219 16:01:32.442089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:01:46 crc kubenswrapper[4810]: I0219 16:01:46.441004 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:01:46 crc kubenswrapper[4810]: E0219 16:01:46.441999 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:01:59 crc kubenswrapper[4810]: I0219 16:01:59.439980 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:01:59 crc kubenswrapper[4810]: E0219 16:01:59.441108 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:02:13 crc kubenswrapper[4810]: I0219 16:02:13.439005 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:02:13 crc kubenswrapper[4810]: E0219 16:02:13.439837 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:02:25 crc kubenswrapper[4810]: I0219 16:02:25.440526 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:02:25 crc kubenswrapper[4810]: E0219 16:02:25.441367 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:02:37 crc kubenswrapper[4810]: I0219 16:02:37.440293 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:02:37 crc kubenswrapper[4810]: E0219 16:02:37.442185 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:02:50 crc kubenswrapper[4810]: I0219 16:02:50.439309 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:02:50 crc kubenswrapper[4810]: E0219 16:02:50.440297 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:03:02 crc kubenswrapper[4810]: I0219 16:03:02.439943 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:03:02 crc kubenswrapper[4810]: E0219 16:03:02.441099 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:03:14 crc kubenswrapper[4810]: I0219 16:03:14.439523 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:03:14 crc kubenswrapper[4810]: E0219 16:03:14.440633 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:03:26 crc kubenswrapper[4810]: I0219 16:03:26.440365 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:03:26 crc kubenswrapper[4810]: E0219 16:03:26.441089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:03:40 crc kubenswrapper[4810]: I0219 16:03:40.440878 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:03:40 crc kubenswrapper[4810]: E0219 16:03:40.442410 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:03:53 crc kubenswrapper[4810]: I0219 16:03:53.439954 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09" Feb 19 16:03:54 crc kubenswrapper[4810]: I0219 16:03:54.616974 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a"} Feb 19 16:06:19 crc kubenswrapper[4810]: I0219 16:06:19.537014 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:06:19 crc kubenswrapper[4810]: I0219 16:06:19.537547 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.189203 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:06:44 crc kubenswrapper[4810]: E0219 16:06:44.191319 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8984eff3-6c82-4e2f-8bd6-1e820a450874" containerName="keystone-cron" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.192598 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8984eff3-6c82-4e2f-8bd6-1e820a450874" containerName="keystone-cron" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.194686 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8984eff3-6c82-4e2f-8bd6-1e820a450874" containerName="keystone-cron" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.206038 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.241357 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.272148 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.272657 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.272762 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88slz\" (UniqueName: \"kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.374304 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.374430 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-88slz\" (UniqueName: \"kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.374501 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.374961 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.374981 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.403719 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88slz\" (UniqueName: \"kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz\") pod \"community-operators-78p9d\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:44 crc kubenswrapper[4810]: I0219 16:06:44.547109 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:06:45 crc kubenswrapper[4810]: I0219 16:06:45.144385 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:06:45 crc kubenswrapper[4810]: I0219 16:06:45.499038 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerStarted","Data":"0c95ccc80153fc2aa4312b67f7d6c9a87f9473946f133fded513db482a269be4"} Feb 19 16:06:46 crc kubenswrapper[4810]: I0219 16:06:46.517185 4810 generic.go:334] "Generic (PLEG): container finished" podID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerID="ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652" exitCode=0 Feb 19 16:06:46 crc kubenswrapper[4810]: I0219 16:06:46.517281 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerDied","Data":"ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652"} Feb 19 16:06:46 crc kubenswrapper[4810]: I0219 16:06:46.523102 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:06:49 crc kubenswrapper[4810]: I0219 16:06:49.537894 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:06:49 crc kubenswrapper[4810]: I0219 16:06:49.538514 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:06:53 crc kubenswrapper[4810]: I0219 16:06:53.586297 4810 generic.go:334] "Generic (PLEG): container finished" podID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerID="702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0" exitCode=0 Feb 19 16:06:53 crc kubenswrapper[4810]: I0219 16:06:53.586655 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerDied","Data":"702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0"} Feb 19 16:06:55 crc kubenswrapper[4810]: I0219 16:06:55.617615 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerStarted","Data":"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f"} Feb 19 16:06:55 crc kubenswrapper[4810]: I0219 16:06:55.645205 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78p9d" podStartSLOduration=3.890360568 podStartE2EDuration="11.645186831s" podCreationTimestamp="2026-02-19 16:06:44 +0000 UTC" firstStartedPulling="2026-02-19 16:06:46.522700395 +0000 UTC m=+3436.004730549" lastFinishedPulling="2026-02-19 16:06:54.277526648 +0000 UTC m=+3443.759556812" observedRunningTime="2026-02-19 16:06:55.638526356 +0000 UTC m=+3445.120556480" watchObservedRunningTime="2026-02-19 16:06:55.645186831 +0000 UTC m=+3445.127216955" Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.548196 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.549026 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.623826 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.792797 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.886756 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.937001 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v68v6"] Feb 19 16:07:04 crc kubenswrapper[4810]: I0219 16:07:04.937221 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v68v6" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="registry-server" containerID="cri-o://fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a" gracePeriod=2 Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.496445 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v68v6" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.612033 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities\") pod \"935efdc2-5596-4207-a27b-68a8a39b6529\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.612215 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlxt4\" (UniqueName: \"kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4\") pod \"935efdc2-5596-4207-a27b-68a8a39b6529\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.612272 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content\") pod \"935efdc2-5596-4207-a27b-68a8a39b6529\" (UID: \"935efdc2-5596-4207-a27b-68a8a39b6529\") " Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.612701 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities" (OuterVolumeSpecName: "utilities") pod "935efdc2-5596-4207-a27b-68a8a39b6529" (UID: "935efdc2-5596-4207-a27b-68a8a39b6529"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.625054 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4" (OuterVolumeSpecName: "kube-api-access-dlxt4") pod "935efdc2-5596-4207-a27b-68a8a39b6529" (UID: "935efdc2-5596-4207-a27b-68a8a39b6529"). InnerVolumeSpecName "kube-api-access-dlxt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.692679 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "935efdc2-5596-4207-a27b-68a8a39b6529" (UID: "935efdc2-5596-4207-a27b-68a8a39b6529"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.714143 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.714169 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlxt4\" (UniqueName: \"kubernetes.io/projected/935efdc2-5596-4207-a27b-68a8a39b6529-kube-api-access-dlxt4\") on node \"crc\" DevicePath \"\"" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.714180 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935efdc2-5596-4207-a27b-68a8a39b6529-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.732139 4810 generic.go:334] "Generic (PLEG): container finished" podID="935efdc2-5596-4207-a27b-68a8a39b6529" containerID="fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a" exitCode=0 Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.733088 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v68v6"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.735862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerDied","Data":"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"}
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.735941 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v68v6" event={"ID":"935efdc2-5596-4207-a27b-68a8a39b6529","Type":"ContainerDied","Data":"514f59846afe5a035371037e15fe72e417c83d08b819931a1c6ac5a2603476da"}
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.735966 4810 scope.go:117] "RemoveContainer" containerID="fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.766920 4810 scope.go:117] "RemoveContainer" containerID="454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.792944 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v68v6"]
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.796184 4810 scope.go:117] "RemoveContainer" containerID="1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.806347 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v68v6"]
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.846447 4810 scope.go:117] "RemoveContainer" containerID="fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"
Feb 19 16:07:05 crc kubenswrapper[4810]: E0219 16:07:05.847007 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a\": container with ID starting with fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a not found: ID does not exist" containerID="fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.847046 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a"} err="failed to get container status \"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a\": rpc error: code = NotFound desc = could not find container \"fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a\": container with ID starting with fdb2805a108d53dbae6af5f6571743f61b51164e75890b28023d46aa9db3310a not found: ID does not exist"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.847071 4810 scope.go:117] "RemoveContainer" containerID="454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"
Feb 19 16:07:05 crc kubenswrapper[4810]: E0219 16:07:05.847406 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936\": container with ID starting with 454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936 not found: ID does not exist" containerID="454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.847435 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936"} err="failed to get container status \"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936\": rpc error: code = NotFound desc = could not find container \"454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936\": container with ID starting with 454b485efdf0b632d0bf68a7c004625d4bec4676c25fb0c84c862bf5d9832936 not found: ID does not exist"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.847472 4810 scope.go:117] "RemoveContainer" containerID="1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999"
Feb 19 16:07:05 crc kubenswrapper[4810]: E0219 16:07:05.847845 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999\": container with ID starting with 1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999 not found: ID does not exist" containerID="1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999"
Feb 19 16:07:05 crc kubenswrapper[4810]: I0219 16:07:05.847875 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999"} err="failed to get container status \"1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999\": rpc error: code = NotFound desc = could not find container \"1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999\": container with ID starting with 1bf8e18430d52e1bdffbf7e432c09ccbb227c1161c0321cdc60f7b17956e0999 not found: ID does not exist"
Feb 19 16:07:07 crc kubenswrapper[4810]: I0219 16:07:07.455971 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" path="/var/lib/kubelet/pods/935efdc2-5596-4207-a27b-68a8a39b6529/volumes"
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.537720 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.538300 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.538364 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d"
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.539128 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.539189 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a" gracePeriod=600
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.895196 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a" exitCode=0
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.895245 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a"}
Feb 19 16:07:19 crc kubenswrapper[4810]: I0219 16:07:19.895626 4810 scope.go:117] "RemoveContainer" containerID="c35ee8c1523e361fb9a2d8ec76f05b9bc5a2cc5f563a8013cd8c34860471cd09"
Feb 19 16:07:20 crc kubenswrapper[4810]: I0219 16:07:20.914244 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e"}
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.054050 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:29 crc kubenswrapper[4810]: E0219 16:07:29.055468 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="extract-utilities"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.055490 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="extract-utilities"
Feb 19 16:07:29 crc kubenswrapper[4810]: E0219 16:07:29.055521 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="registry-server"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.055532 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="registry-server"
Feb 19 16:07:29 crc kubenswrapper[4810]: E0219 16:07:29.055562 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="extract-content"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.055572 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="extract-content"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.055903 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="935efdc2-5596-4207-a27b-68a8a39b6529" containerName="registry-server"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.058396 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.084549 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.178659 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.179108 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.179498 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk984\" (UniqueName: \"kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.281700 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.282113 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk984\" (UniqueName: \"kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.282246 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.282528 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.282794 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content\") pod \"redhat-marketplace-2nf72\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.314298 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk984\" (UniqueName: \"kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984\") pod \"redhat-marketplace-2nf72\" (UID: 
\"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") " pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.430582 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:29 crc kubenswrapper[4810]: I0219 16:07:29.908109 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:30 crc kubenswrapper[4810]: I0219 16:07:30.026406 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerStarted","Data":"8253aabd5e0eccfe864a20e447838bea8c93e171ca8b4c9d578faed4739fe2a9"}
Feb 19 16:07:31 crc kubenswrapper[4810]: I0219 16:07:31.039560 4810 generic.go:334] "Generic (PLEG): container finished" podID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerID="b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c" exitCode=0
Feb 19 16:07:31 crc kubenswrapper[4810]: I0219 16:07:31.039655 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerDied","Data":"b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c"}
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.056000 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerStarted","Data":"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"}
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.460451 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jljh4"]
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.464445 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.479417 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jljh4"]
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.564352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.564396 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.564424 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jss82\" (UniqueName: \"kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.666494 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.666553 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.666594 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jss82\" (UniqueName: \"kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.667092 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.667152 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.693231 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jss82\" (UniqueName: \"kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82\") pod \"certified-operators-jljh4\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.789556 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jljh4"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.867276 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-64j86"]
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.869522 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.879260 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64j86"]
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.971503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7q7\" (UniqueName: \"kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.971592 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:32 crc kubenswrapper[4810]: I0219 16:07:32.971633 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.073914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7q7\" (UniqueName: \"kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.074018 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.074068 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.074681 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.075615 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.101475 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7q7\" (UniqueName: \"kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7\") pod \"redhat-operators-64j86\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:33 crc kubenswrapper[4810]: I0219 16:07:33.283430 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64j86"
Feb 19 16:07:34 crc kubenswrapper[4810]: I0219 16:07:34.058633 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jljh4"]
Feb 19 16:07:34 crc kubenswrapper[4810]: I0219 16:07:34.091725 4810 generic.go:334] "Generic (PLEG): container finished" podID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerID="05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b" exitCode=0
Feb 19 16:07:34 crc kubenswrapper[4810]: I0219 16:07:34.091800 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerDied","Data":"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"}
Feb 19 16:07:34 crc kubenswrapper[4810]: I0219 16:07:34.094836 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerStarted","Data":"3f2a36ddbb2854ba8d1bc8b3161a45126a4855b2d5a76816b0ab55f69aa7490f"}
Feb 19 16:07:34 crc kubenswrapper[4810]: I0219 16:07:34.347666 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64j86"]
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.106138 4810 generic.go:334] "Generic (PLEG): container finished" podID="577659f8-9dbb-46d0-b2cb-80951550957f" containerID="0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f" exitCode=0
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.106306 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerDied","Data":"0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f"}
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.106586 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerStarted","Data":"96a616bff386690da5f589c057f363366e36ad19ad36fdff70cbdf3bb4704f10"}
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.108941 4810 generic.go:334] "Generic (PLEG): container finished" podID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerID="6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6" exitCode=0
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.109510 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerDied","Data":"6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6"}
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.115550 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerStarted","Data":"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"}
Feb 19 16:07:35 crc kubenswrapper[4810]: I0219 16:07:35.175307 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2nf72" podStartSLOduration=2.44099654 podStartE2EDuration="6.175286507s" podCreationTimestamp="2026-02-19 16:07:29 +0000 UTC" firstStartedPulling="2026-02-19 16:07:31.042684499 +0000 UTC m=+3480.524714633" lastFinishedPulling="2026-02-19 16:07:34.776974476 +0000 UTC m=+3484.259004600" observedRunningTime="2026-02-19 16:07:35.169224207 +0000 UTC 
m=+3484.651254351" watchObservedRunningTime="2026-02-19 16:07:35.175286507 +0000 UTC m=+3484.657316631"
Feb 19 16:07:38 crc kubenswrapper[4810]: I0219 16:07:38.150727 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerStarted","Data":"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12"}
Feb 19 16:07:38 crc kubenswrapper[4810]: I0219 16:07:38.154073 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerStarted","Data":"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994"}
Feb 19 16:07:39 crc kubenswrapper[4810]: I0219 16:07:39.431424 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:39 crc kubenswrapper[4810]: I0219 16:07:39.433942 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:39 crc kubenswrapper[4810]: I0219 16:07:39.505001 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:40 crc kubenswrapper[4810]: I0219 16:07:40.239319 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.225221 4810 generic.go:334] "Generic (PLEG): container finished" podID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerID="0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12" exitCode=0
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.225306 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerDied","Data":"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12"}
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.232597 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.232809 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2nf72" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="registry-server" containerID="cri-o://d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993" gracePeriod=2
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.737046 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.900424 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities\") pod \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") "
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.900546 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content\") pod \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") "
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.900612 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk984\" (UniqueName: \"kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984\") pod \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\" (UID: \"f6de6280-d32b-4c86-b93b-4c0a06ee631a\") "
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.904769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities" (OuterVolumeSpecName: "utilities") pod "f6de6280-d32b-4c86-b93b-4c0a06ee631a" (UID: "f6de6280-d32b-4c86-b93b-4c0a06ee631a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.914078 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984" (OuterVolumeSpecName: "kube-api-access-rk984") pod "f6de6280-d32b-4c86-b93b-4c0a06ee631a" (UID: "f6de6280-d32b-4c86-b93b-4c0a06ee631a"). InnerVolumeSpecName "kube-api-access-rk984". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 16:07:44 crc kubenswrapper[4810]: I0219 16:07:44.936347 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6de6280-d32b-4c86-b93b-4c0a06ee631a" (UID: "f6de6280-d32b-4c86-b93b-4c0a06ee631a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.003147 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk984\" (UniqueName: \"kubernetes.io/projected/f6de6280-d32b-4c86-b93b-4c0a06ee631a-kube-api-access-rk984\") on node \"crc\" DevicePath \"\""
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.003232 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.003242 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6de6280-d32b-4c86-b93b-4c0a06ee631a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.239828 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerStarted","Data":"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42"}
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.243991 4810 generic.go:334] "Generic (PLEG): container finished" podID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerID="d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993" exitCode=0
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.244046 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nf72"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.244045 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerDied","Data":"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"}
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.244165 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nf72" event={"ID":"f6de6280-d32b-4c86-b93b-4c0a06ee631a","Type":"ContainerDied","Data":"8253aabd5e0eccfe864a20e447838bea8c93e171ca8b4c9d578faed4739fe2a9"}
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.244194 4810 scope.go:117] "RemoveContainer" containerID="d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.269619 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jljh4" podStartSLOduration=3.535936123 podStartE2EDuration="13.267643211s" podCreationTimestamp="2026-02-19 16:07:32 +0000 UTC" firstStartedPulling="2026-02-19 16:07:35.11257298 +0000 UTC m=+3484.594603104" lastFinishedPulling="2026-02-19 16:07:44.844280058 +0000 UTC m=+3494.326310192" observedRunningTime="2026-02-19 16:07:45.262852522 +0000 UTC m=+3494.744882656" watchObservedRunningTime="2026-02-19 16:07:45.267643211 +0000 UTC m=+3494.749673335"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.295301 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.302250 4810 scope.go:117] "RemoveContainer" containerID="05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.311507 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nf72"]
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.608176 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" path="/var/lib/kubelet/pods/f6de6280-d32b-4c86-b93b-4c0a06ee631a/volumes"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.693003 4810 scope.go:117] "RemoveContainer" containerID="b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.725501 4810 scope.go:117] "RemoveContainer" containerID="d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"
Feb 19 16:07:45 crc kubenswrapper[4810]: E0219 16:07:45.725975 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993\": container with ID starting with d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993 not found: ID does not exist" containerID="d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.726026 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993"} err="failed to get container status \"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993\": rpc error: code = NotFound desc = could not find container \"d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993\": container with ID starting with d7bd439d8bc0d4dd1a6b8ad202c10491abd329175439774b81dd46530d917993 not found: ID does not exist"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.726058 4810 scope.go:117] "RemoveContainer" containerID="05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"
Feb 19 16:07:45 crc kubenswrapper[4810]: E0219 16:07:45.726549 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b\": container with ID starting with 05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b not found: ID does not exist" containerID="05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.726581 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b"} err="failed to get container status \"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b\": rpc error: code = NotFound desc = could not find container \"05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b\": container with ID starting with 05cc9a216a60a1d1ff50d8970ba8fe384e3037ab972f04c4b82084af60c6196b not found: ID does not exist"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.726602 4810 scope.go:117] "RemoveContainer" containerID="b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c"
Feb 19 16:07:45 crc kubenswrapper[4810]: E0219 16:07:45.726885 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c\": container with ID starting with b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c not found: ID does not exist" containerID="b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c"
Feb 19 16:07:45 crc kubenswrapper[4810]: I0219 16:07:45.726913 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c"} err="failed to get container status \"b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c\": rpc error: code = NotFound desc = could not find 
container \"b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c\": container with ID starting with b46f8abcb3e1715f798a9e99f09aa1eda0b33b52fe5bccf58044aa7a87c06e1c not found: ID does not exist" Feb 19 16:07:52 crc kubenswrapper[4810]: I0219 16:07:52.790695 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:07:52 crc kubenswrapper[4810]: I0219 16:07:52.791493 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:07:53 crc kubenswrapper[4810]: I0219 16:07:53.866688 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jljh4" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="registry-server" probeResult="failure" output=< Feb 19 16:07:53 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:07:53 crc kubenswrapper[4810]: > Feb 19 16:07:56 crc kubenswrapper[4810]: I0219 16:07:56.404234 4810 generic.go:334] "Generic (PLEG): container finished" podID="577659f8-9dbb-46d0-b2cb-80951550957f" containerID="b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994" exitCode=0 Feb 19 16:07:56 crc kubenswrapper[4810]: I0219 16:07:56.405043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerDied","Data":"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994"} Feb 19 16:07:58 crc kubenswrapper[4810]: I0219 16:07:58.429417 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerStarted","Data":"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d"} Feb 19 16:07:58 crc kubenswrapper[4810]: I0219 16:07:58.454880 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-64j86" podStartSLOduration=4.307370601 podStartE2EDuration="26.454858511s" podCreationTimestamp="2026-02-19 16:07:32 +0000 UTC" firstStartedPulling="2026-02-19 16:07:35.107802831 +0000 UTC m=+3484.589832965" lastFinishedPulling="2026-02-19 16:07:57.255290741 +0000 UTC m=+3506.737320875" observedRunningTime="2026-02-19 16:07:58.452542083 +0000 UTC m=+3507.934572207" watchObservedRunningTime="2026-02-19 16:07:58.454858511 +0000 UTC m=+3507.936888655" Feb 19 16:08:02 crc kubenswrapper[4810]: I0219 16:08:02.850902 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:08:02 crc kubenswrapper[4810]: I0219 16:08:02.928431 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:08:03 crc kubenswrapper[4810]: I0219 16:08:03.283745 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:03 crc kubenswrapper[4810]: I0219 16:08:03.284732 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:03 crc kubenswrapper[4810]: I0219 16:08:03.665608 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jljh4"] Feb 19 16:08:04 crc kubenswrapper[4810]: I0219 16:08:04.337520 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-64j86" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" probeResult="failure" output=< Feb 19 16:08:04 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:08:04 crc kubenswrapper[4810]: > Feb 19 16:08:04 crc kubenswrapper[4810]: I0219 16:08:04.494619 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jljh4" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="registry-server" containerID="cri-o://a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42" gracePeriod=2 Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.478978 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.505779 4810 generic.go:334] "Generic (PLEG): container finished" podID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerID="a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42" exitCode=0 Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.505828 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerDied","Data":"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42"} Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.505856 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jljh4" event={"ID":"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd","Type":"ContainerDied","Data":"3f2a36ddbb2854ba8d1bc8b3161a45126a4855b2d5a76816b0ab55f69aa7490f"} Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.505875 4810 scope.go:117] "RemoveContainer" containerID="a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.506020 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jljh4" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.574295 4810 scope.go:117] "RemoveContainer" containerID="0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.651979 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content\") pod \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.652417 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jss82\" (UniqueName: \"kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82\") pod \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.652502 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities\") pod \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\" (UID: \"c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd\") " Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.654119 4810 scope.go:117] "RemoveContainer" containerID="6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.654238 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities" (OuterVolumeSpecName: "utilities") pod "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" (UID: "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.698520 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82" (OuterVolumeSpecName: "kube-api-access-jss82") pod "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" (UID: "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd"). InnerVolumeSpecName "kube-api-access-jss82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.755056 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.755090 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jss82\" (UniqueName: \"kubernetes.io/projected/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-kube-api-access-jss82\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.761366 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" (UID: "c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.768759 4810 scope.go:117] "RemoveContainer" containerID="a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42" Feb 19 16:08:05 crc kubenswrapper[4810]: E0219 16:08:05.769468 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42\": container with ID starting with a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42 not found: ID does not exist" containerID="a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.769527 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42"} err="failed to get container status \"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42\": rpc error: code = NotFound desc = could not find container \"a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42\": container with ID starting with a1b65af471e174bbe1242694aadff9dbcea59298cf527d53fbdffc0d91120e42 not found: ID does not exist" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.769556 4810 scope.go:117] "RemoveContainer" containerID="0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12" Feb 19 16:08:05 crc kubenswrapper[4810]: E0219 16:08:05.769865 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12\": container with ID starting with 0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12 not found: ID does not exist" containerID="0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.769910 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12"} err="failed to get container status \"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12\": rpc error: code = NotFound desc = could not find container \"0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12\": container with ID starting with 0436f9add9060712a8eadd4b9553ee23df2b08b125e5848fa1234b303b4f8c12 not found: ID does not exist" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.769938 4810 scope.go:117] "RemoveContainer" containerID="6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6" Feb 19 16:08:05 crc kubenswrapper[4810]: E0219 16:08:05.770259 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6\": container with ID starting with 6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6 not found: ID does not exist" containerID="6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.770290 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6"} err="failed to get container status \"6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6\": rpc error: code = NotFound desc = could not find container \"6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6\": container with ID starting with 6fb2edb1a24f496d9ddd2804e3d51a61772791f54decbda879e1317be0fa6cc6 not found: ID does not exist" Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.840528 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jljh4"] Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 
16:08:05.850784 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jljh4"] Feb 19 16:08:05 crc kubenswrapper[4810]: I0219 16:08:05.857046 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:07 crc kubenswrapper[4810]: I0219 16:08:07.456123 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" path="/var/lib/kubelet/pods/c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd/volumes" Feb 19 16:08:14 crc kubenswrapper[4810]: I0219 16:08:14.350358 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-64j86" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" probeResult="failure" output=< Feb 19 16:08:14 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:08:14 crc kubenswrapper[4810]: > Feb 19 16:08:23 crc kubenswrapper[4810]: I0219 16:08:23.368763 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:23 crc kubenswrapper[4810]: I0219 16:08:23.411945 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.013042 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64j86"] Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.013757 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-64j86" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" containerID="cri-o://129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d" gracePeriod=2 Feb 19 16:08:26 crc 
kubenswrapper[4810]: I0219 16:08:26.516449 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.544302 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content\") pod \"577659f8-9dbb-46d0-b2cb-80951550957f\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.544747 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w7q7\" (UniqueName: \"kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7\") pod \"577659f8-9dbb-46d0-b2cb-80951550957f\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.544784 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities\") pod \"577659f8-9dbb-46d0-b2cb-80951550957f\" (UID: \"577659f8-9dbb-46d0-b2cb-80951550957f\") " Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.545693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities" (OuterVolumeSpecName: "utilities") pod "577659f8-9dbb-46d0-b2cb-80951550957f" (UID: "577659f8-9dbb-46d0-b2cb-80951550957f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.566955 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7" (OuterVolumeSpecName: "kube-api-access-4w7q7") pod "577659f8-9dbb-46d0-b2cb-80951550957f" (UID: "577659f8-9dbb-46d0-b2cb-80951550957f"). InnerVolumeSpecName "kube-api-access-4w7q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.651283 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w7q7\" (UniqueName: \"kubernetes.io/projected/577659f8-9dbb-46d0-b2cb-80951550957f-kube-api-access-4w7q7\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.651343 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.691825 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "577659f8-9dbb-46d0-b2cb-80951550957f" (UID: "577659f8-9dbb-46d0-b2cb-80951550957f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.752081 4810 generic.go:334] "Generic (PLEG): container finished" podID="577659f8-9dbb-46d0-b2cb-80951550957f" containerID="129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d" exitCode=0 Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.752159 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerDied","Data":"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d"} Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.752205 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64j86" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.752243 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64j86" event={"ID":"577659f8-9dbb-46d0-b2cb-80951550957f","Type":"ContainerDied","Data":"96a616bff386690da5f589c057f363366e36ad19ad36fdff70cbdf3bb4704f10"} Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.752278 4810 scope.go:117] "RemoveContainer" containerID="129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.753552 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577659f8-9dbb-46d0-b2cb-80951550957f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.794016 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64j86"] Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.797731 4810 scope.go:117] "RemoveContainer" containerID="b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 
16:08:26.803852 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-64j86"] Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.831829 4810 scope.go:117] "RemoveContainer" containerID="0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.909833 4810 scope.go:117] "RemoveContainer" containerID="129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d" Feb 19 16:08:26 crc kubenswrapper[4810]: E0219 16:08:26.910356 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d\": container with ID starting with 129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d not found: ID does not exist" containerID="129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.910395 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d"} err="failed to get container status \"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d\": rpc error: code = NotFound desc = could not find container \"129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d\": container with ID starting with 129527e774d59e3246c9d516c24ebaf0a21d8e8e0c1747f09b62f3640f4d8e2d not found: ID does not exist" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.910425 4810 scope.go:117] "RemoveContainer" containerID="b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994" Feb 19 16:08:26 crc kubenswrapper[4810]: E0219 16:08:26.911012 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994\": container with ID 
starting with b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994 not found: ID does not exist" containerID="b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.911085 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994"} err="failed to get container status \"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994\": rpc error: code = NotFound desc = could not find container \"b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994\": container with ID starting with b8577964a2dfe54d0a59674fdaf01d96e87490efbc3e6f7eab9d40ccbe5b4994 not found: ID does not exist" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.911122 4810 scope.go:117] "RemoveContainer" containerID="0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f" Feb 19 16:08:26 crc kubenswrapper[4810]: E0219 16:08:26.911721 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f\": container with ID starting with 0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f not found: ID does not exist" containerID="0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f" Feb 19 16:08:26 crc kubenswrapper[4810]: I0219 16:08:26.911773 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f"} err="failed to get container status \"0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f\": rpc error: code = NotFound desc = could not find container \"0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f\": container with ID starting with 0a997d370511dea75f129a7681651a204854d63f35705e130fcf606397cfec6f not found: 
ID does not exist" Feb 19 16:08:27 crc kubenswrapper[4810]: I0219 16:08:27.460755 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" path="/var/lib/kubelet/pods/577659f8-9dbb-46d0-b2cb-80951550957f/volumes" Feb 19 16:09:19 crc kubenswrapper[4810]: I0219 16:09:19.537226 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:09:19 crc kubenswrapper[4810]: I0219 16:09:19.537936 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:09:49 crc kubenswrapper[4810]: I0219 16:09:49.537570 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:09:49 crc kubenswrapper[4810]: I0219 16:09:49.538186 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:10:19 crc kubenswrapper[4810]: I0219 16:10:19.537435 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:10:19 crc kubenswrapper[4810]: I0219 16:10:19.538047 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:10:19 crc kubenswrapper[4810]: I0219 16:10:19.538105 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:10:19 crc kubenswrapper[4810]: I0219 16:10:19.539077 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:10:19 crc kubenswrapper[4810]: I0219 16:10:19.539173 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" gracePeriod=600 Feb 19 16:10:19 crc kubenswrapper[4810]: E0219 16:10:19.674192 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:10:20 crc kubenswrapper[4810]: I0219 16:10:20.304234 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" exitCode=0 Feb 19 16:10:20 crc kubenswrapper[4810]: I0219 16:10:20.304309 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e"} Feb 19 16:10:20 crc kubenswrapper[4810]: I0219 16:10:20.304608 4810 scope.go:117] "RemoveContainer" containerID="2578d4284c44e38a7496c83ee59e7a00d386a9b2aebd20063f610b39b9a8d15a" Feb 19 16:10:20 crc kubenswrapper[4810]: I0219 16:10:20.305268 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:10:20 crc kubenswrapper[4810]: E0219 16:10:20.305543 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:10:34 crc kubenswrapper[4810]: I0219 16:10:34.440469 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:10:34 crc kubenswrapper[4810]: E0219 16:10:34.441252 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:10:48 crc kubenswrapper[4810]: I0219 16:10:48.439451 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:10:48 crc kubenswrapper[4810]: E0219 16:10:48.440131 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:11:02 crc kubenswrapper[4810]: I0219 16:11:02.439578 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:11:02 crc kubenswrapper[4810]: E0219 16:11:02.440744 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:11:16 crc kubenswrapper[4810]: I0219 16:11:16.439788 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:11:16 crc kubenswrapper[4810]: E0219 16:11:16.440684 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:11:29 crc kubenswrapper[4810]: I0219 16:11:29.439996 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:11:29 crc kubenswrapper[4810]: E0219 16:11:29.440990 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:11:44 crc kubenswrapper[4810]: I0219 16:11:44.439645 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:11:44 crc kubenswrapper[4810]: E0219 16:11:44.440539 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:11:55 crc kubenswrapper[4810]: I0219 16:11:55.439385 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:11:55 crc kubenswrapper[4810]: E0219 16:11:55.440380 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:12:09 crc kubenswrapper[4810]: I0219 16:12:09.439424 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:12:09 crc kubenswrapper[4810]: E0219 16:12:09.440348 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:12:24 crc kubenswrapper[4810]: I0219 16:12:24.439438 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:12:24 crc kubenswrapper[4810]: E0219 16:12:24.440207 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:12:36 crc kubenswrapper[4810]: I0219 16:12:36.439837 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:12:36 crc kubenswrapper[4810]: E0219 16:12:36.440923 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:12:47 crc kubenswrapper[4810]: I0219 16:12:47.440066 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:12:47 crc kubenswrapper[4810]: E0219 16:12:47.441218 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:12:58 crc kubenswrapper[4810]: I0219 16:12:58.440212 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:12:58 crc kubenswrapper[4810]: E0219 16:12:58.441417 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:13:11 crc kubenswrapper[4810]: I0219 16:13:11.447892 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:13:11 crc kubenswrapper[4810]: E0219 16:13:11.448641 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:13:26 crc kubenswrapper[4810]: I0219 16:13:26.440013 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:13:26 crc kubenswrapper[4810]: E0219 16:13:26.440902 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:13:40 crc kubenswrapper[4810]: I0219 16:13:40.440052 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:13:40 crc kubenswrapper[4810]: E0219 16:13:40.440976 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:13:54 crc kubenswrapper[4810]: I0219 16:13:54.440201 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:13:54 crc kubenswrapper[4810]: E0219 16:13:54.441554 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:14:09 crc kubenswrapper[4810]: I0219 16:14:09.440172 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:14:09 crc kubenswrapper[4810]: E0219 16:14:09.441736 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:14:23 crc kubenswrapper[4810]: I0219 16:14:23.439760 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:14:23 crc kubenswrapper[4810]: E0219 16:14:23.440853 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:14:36 crc kubenswrapper[4810]: I0219 16:14:36.450540 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:14:36 crc kubenswrapper[4810]: E0219 16:14:36.451582 4810 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:14:49 crc kubenswrapper[4810]: I0219 16:14:49.439443 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:14:49 crc kubenswrapper[4810]: E0219 16:14:49.440031 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.222582 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq"] Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.223864 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.223879 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.223909 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.223915 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.223930 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.223936 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.223949 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.223955 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.223973 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.223981 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.224008 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224014 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="extract-content" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.224026 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224032 4810 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.224048 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224054 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="extract-utilities" Feb 19 16:15:00 crc kubenswrapper[4810]: E0219 16:15:00.224078 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224084 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224446 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d8a679-ba7e-40bb-a22e-a1ae751fe6bd" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224467 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="577659f8-9dbb-46d0-b2cb-80951550957f" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.224491 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6de6280-d32b-4c86-b93b-4c0a06ee631a" containerName="registry-server" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.225396 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.228425 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.230932 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.234966 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq"] Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.278812 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.279149 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28zk\" (UniqueName: \"kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.279220 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.380971 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28zk\" (UniqueName: \"kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.381378 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.381469 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.383314 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.387654 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.402026 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28zk\" (UniqueName: \"kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk\") pod \"collect-profiles-29525295-b9xtq\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:00 crc kubenswrapper[4810]: I0219 16:15:00.564802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:01 crc kubenswrapper[4810]: I0219 16:15:01.071290 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq"] Feb 19 16:15:01 crc kubenswrapper[4810]: I0219 16:15:01.274814 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" event={"ID":"99fb536c-bd62-47d3-87d6-9f56d3e51f72","Type":"ContainerStarted","Data":"932479730c0a6c125151920519504a29cae36b5a433b12d52524833608c85c05"} Feb 19 16:15:01 crc kubenswrapper[4810]: I0219 16:15:01.276101 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" event={"ID":"99fb536c-bd62-47d3-87d6-9f56d3e51f72","Type":"ContainerStarted","Data":"c36e6a7677ac2c7f43e439596cd29f482f6c5361556fbcc3e64fe5eceac4059c"} Feb 19 16:15:01 crc kubenswrapper[4810]: I0219 16:15:01.294537 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" 
podStartSLOduration=1.294522117 podStartE2EDuration="1.294522117s" podCreationTimestamp="2026-02-19 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:15:01.291753948 +0000 UTC m=+3930.773784072" watchObservedRunningTime="2026-02-19 16:15:01.294522117 +0000 UTC m=+3930.776552241" Feb 19 16:15:02 crc kubenswrapper[4810]: I0219 16:15:02.287128 4810 generic.go:334] "Generic (PLEG): container finished" podID="99fb536c-bd62-47d3-87d6-9f56d3e51f72" containerID="932479730c0a6c125151920519504a29cae36b5a433b12d52524833608c85c05" exitCode=0 Feb 19 16:15:02 crc kubenswrapper[4810]: I0219 16:15:02.287174 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" event={"ID":"99fb536c-bd62-47d3-87d6-9f56d3e51f72","Type":"ContainerDied","Data":"932479730c0a6c125151920519504a29cae36b5a433b12d52524833608c85c05"} Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.754991 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.853014 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume\") pod \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.853508 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume\") pod \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.853563 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28zk\" (UniqueName: \"kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk\") pod \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\" (UID: \"99fb536c-bd62-47d3-87d6-9f56d3e51f72\") " Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.854361 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume" (OuterVolumeSpecName: "config-volume") pod "99fb536c-bd62-47d3-87d6-9f56d3e51f72" (UID: "99fb536c-bd62-47d3-87d6-9f56d3e51f72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.872665 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "99fb536c-bd62-47d3-87d6-9f56d3e51f72" (UID: "99fb536c-bd62-47d3-87d6-9f56d3e51f72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.882028 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk" (OuterVolumeSpecName: "kube-api-access-j28zk") pod "99fb536c-bd62-47d3-87d6-9f56d3e51f72" (UID: "99fb536c-bd62-47d3-87d6-9f56d3e51f72"). InnerVolumeSpecName "kube-api-access-j28zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.954701 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99fb536c-bd62-47d3-87d6-9f56d3e51f72-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.954723 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99fb536c-bd62-47d3-87d6-9f56d3e51f72-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:15:03 crc kubenswrapper[4810]: I0219 16:15:03.954732 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28zk\" (UniqueName: \"kubernetes.io/projected/99fb536c-bd62-47d3-87d6-9f56d3e51f72-kube-api-access-j28zk\") on node \"crc\" DevicePath \"\"" Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.307209 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" event={"ID":"99fb536c-bd62-47d3-87d6-9f56d3e51f72","Type":"ContainerDied","Data":"c36e6a7677ac2c7f43e439596cd29f482f6c5361556fbcc3e64fe5eceac4059c"} Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.307258 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c36e6a7677ac2c7f43e439596cd29f482f6c5361556fbcc3e64fe5eceac4059c" Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.307274 4810 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq" Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.375521 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78"] Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.384237 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525250-rls78"] Feb 19 16:15:04 crc kubenswrapper[4810]: I0219 16:15:04.440070 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:15:04 crc kubenswrapper[4810]: E0219 16:15:04.440393 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:15:05 crc kubenswrapper[4810]: I0219 16:15:05.461514 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40500a46-a16b-4282-86e4-1d99277d7c7a" path="/var/lib/kubelet/pods/40500a46-a16b-4282-86e4-1d99277d7c7a/volumes" Feb 19 16:15:16 crc kubenswrapper[4810]: I0219 16:15:16.159830 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-78bc5d479f-k79xx" podUID="9190a865-226b-487c-b0f9-2573f50f0eab" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 19 16:15:19 crc kubenswrapper[4810]: I0219 16:15:19.439951 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:15:19 crc kubenswrapper[4810]: E0219 16:15:19.440731 4810 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:15:30 crc kubenswrapper[4810]: I0219 16:15:30.440382 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:15:31 crc kubenswrapper[4810]: I0219 16:15:31.612456 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5"} Feb 19 16:15:43 crc kubenswrapper[4810]: I0219 16:15:43.900654 4810 scope.go:117] "RemoveContainer" containerID="b50c5c7b6b7301c6c8992f22517d1d5b4a8f3065b75aef28f8faaa61ad94fd27" Feb 19 16:17:07 crc kubenswrapper[4810]: I0219 16:17:07.902046 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:07 crc kubenswrapper[4810]: E0219 16:17:07.903080 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fb536c-bd62-47d3-87d6-9f56d3e51f72" containerName="collect-profiles" Feb 19 16:17:07 crc kubenswrapper[4810]: I0219 16:17:07.903096 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fb536c-bd62-47d3-87d6-9f56d3e51f72" containerName="collect-profiles" Feb 19 16:17:07 crc kubenswrapper[4810]: I0219 16:17:07.903334 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="99fb536c-bd62-47d3-87d6-9f56d3e51f72" containerName="collect-profiles" Feb 19 16:17:07 crc kubenswrapper[4810]: I0219 16:17:07.905068 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.002152 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.022424 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xbc9\" (UniqueName: \"kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.022503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.022603 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.124686 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.124886 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.124983 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xbc9\" (UniqueName: \"kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.125163 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.125424 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.156588 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xbc9\" (UniqueName: \"kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9\") pod \"community-operators-kzwcn\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.228111 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.762003 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:08 crc kubenswrapper[4810]: I0219 16:17:08.822970 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerStarted","Data":"d2ce1af4e07d6583d631ee1c7d3aaab708cde8706d5712cba248062381b3897d"} Feb 19 16:17:09 crc kubenswrapper[4810]: I0219 16:17:09.836290 4810 generic.go:334] "Generic (PLEG): container finished" podID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerID="b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7" exitCode=0 Feb 19 16:17:09 crc kubenswrapper[4810]: I0219 16:17:09.836564 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerDied","Data":"b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7"} Feb 19 16:17:09 crc kubenswrapper[4810]: I0219 16:17:09.838860 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:17:10 crc kubenswrapper[4810]: I0219 16:17:10.856261 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerStarted","Data":"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe"} Feb 19 16:17:11 crc kubenswrapper[4810]: I0219 16:17:11.870727 4810 generic.go:334] "Generic (PLEG): container finished" podID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerID="83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe" exitCode=0 Feb 19 16:17:11 crc kubenswrapper[4810]: I0219 16:17:11.870773 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerDied","Data":"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe"} Feb 19 16:17:12 crc kubenswrapper[4810]: I0219 16:17:12.883113 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerStarted","Data":"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a"} Feb 19 16:17:18 crc kubenswrapper[4810]: I0219 16:17:18.228837 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:18 crc kubenswrapper[4810]: I0219 16:17:18.229460 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:18 crc kubenswrapper[4810]: I0219 16:17:18.301218 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:18 crc kubenswrapper[4810]: I0219 16:17:18.331261 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kzwcn" podStartSLOduration=8.856918612 podStartE2EDuration="11.331236552s" podCreationTimestamp="2026-02-19 16:17:07 +0000 UTC" firstStartedPulling="2026-02-19 16:17:09.838070034 +0000 UTC m=+4059.320100198" lastFinishedPulling="2026-02-19 16:17:12.312388004 +0000 UTC m=+4061.794418138" observedRunningTime="2026-02-19 16:17:12.911233613 +0000 UTC m=+4062.393263737" watchObservedRunningTime="2026-02-19 16:17:18.331236552 +0000 UTC m=+4067.813266706" Feb 19 16:17:19 crc kubenswrapper[4810]: I0219 16:17:19.018993 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:19 crc kubenswrapper[4810]: I0219 
16:17:19.092497 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:20 crc kubenswrapper[4810]: I0219 16:17:20.961186 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kzwcn" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="registry-server" containerID="cri-o://d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a" gracePeriod=2 Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.526379 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.665694 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities\") pod \"29a395e7-cdab-40db-ae2c-1d746a31aeec\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.666045 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xbc9\" (UniqueName: \"kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9\") pod \"29a395e7-cdab-40db-ae2c-1d746a31aeec\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.666161 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content\") pod \"29a395e7-cdab-40db-ae2c-1d746a31aeec\" (UID: \"29a395e7-cdab-40db-ae2c-1d746a31aeec\") " Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.666594 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities" (OuterVolumeSpecName: 
"utilities") pod "29a395e7-cdab-40db-ae2c-1d746a31aeec" (UID: "29a395e7-cdab-40db-ae2c-1d746a31aeec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.667884 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.682578 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9" (OuterVolumeSpecName: "kube-api-access-4xbc9") pod "29a395e7-cdab-40db-ae2c-1d746a31aeec" (UID: "29a395e7-cdab-40db-ae2c-1d746a31aeec"). InnerVolumeSpecName "kube-api-access-4xbc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.716495 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a395e7-cdab-40db-ae2c-1d746a31aeec" (UID: "29a395e7-cdab-40db-ae2c-1d746a31aeec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.786519 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a395e7-cdab-40db-ae2c-1d746a31aeec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.786557 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xbc9\" (UniqueName: \"kubernetes.io/projected/29a395e7-cdab-40db-ae2c-1d746a31aeec-kube-api-access-4xbc9\") on node \"crc\" DevicePath \"\"" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.979637 4810 generic.go:334] "Generic (PLEG): container finished" podID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerID="d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a" exitCode=0 Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.979705 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerDied","Data":"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a"} Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.979730 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzwcn" Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.979819 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzwcn" event={"ID":"29a395e7-cdab-40db-ae2c-1d746a31aeec","Type":"ContainerDied","Data":"d2ce1af4e07d6583d631ee1c7d3aaab708cde8706d5712cba248062381b3897d"} Feb 19 16:17:21 crc kubenswrapper[4810]: I0219 16:17:21.979856 4810 scope.go:117] "RemoveContainer" containerID="d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.036889 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.049561 4810 scope.go:117] "RemoveContainer" containerID="83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.049678 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kzwcn"] Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.072681 4810 scope.go:117] "RemoveContainer" containerID="b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.129538 4810 scope.go:117] "RemoveContainer" containerID="d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a" Feb 19 16:17:22 crc kubenswrapper[4810]: E0219 16:17:22.130103 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a\": container with ID starting with d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a not found: ID does not exist" containerID="d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.130142 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a"} err="failed to get container status \"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a\": rpc error: code = NotFound desc = could not find container \"d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a\": container with ID starting with d0e3429da6f9749d75a96ff2598af52b5193e6866e807dc84768878ca6720e4a not found: ID does not exist" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.130169 4810 scope.go:117] "RemoveContainer" containerID="83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe" Feb 19 16:17:22 crc kubenswrapper[4810]: E0219 16:17:22.131119 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe\": container with ID starting with 83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe not found: ID does not exist" containerID="83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.131155 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe"} err="failed to get container status \"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe\": rpc error: code = NotFound desc = could not find container \"83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe\": container with ID starting with 83153e32fc105bc211e810ffb7fbd7a3511dd3b91cf6cb2f1882279bdb7321fe not found: ID does not exist" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.131174 4810 scope.go:117] "RemoveContainer" containerID="b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7" Feb 19 16:17:22 crc kubenswrapper[4810]: E0219 
16:17:22.131996 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7\": container with ID starting with b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7 not found: ID does not exist" containerID="b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7" Feb 19 16:17:22 crc kubenswrapper[4810]: I0219 16:17:22.132025 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7"} err="failed to get container status \"b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7\": rpc error: code = NotFound desc = could not find container \"b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7\": container with ID starting with b829f1abe881ccf1d9a4b3ef3b15225ee5ee8a7301ae1750f3751e75f85d7cb7 not found: ID does not exist" Feb 19 16:17:23 crc kubenswrapper[4810]: I0219 16:17:23.457351 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" path="/var/lib/kubelet/pods/29a395e7-cdab-40db-ae2c-1d746a31aeec/volumes" Feb 19 16:17:49 crc kubenswrapper[4810]: I0219 16:17:49.537805 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:17:49 crc kubenswrapper[4810]: I0219 16:17:49.538529 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.206630 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:02 crc kubenswrapper[4810]: E0219 16:18:02.207677 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="extract-content" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.207696 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="extract-content" Feb 19 16:18:02 crc kubenswrapper[4810]: E0219 16:18:02.207722 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="extract-utilities" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.207731 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="extract-utilities" Feb 19 16:18:02 crc kubenswrapper[4810]: E0219 16:18:02.207759 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="registry-server" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.207768 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="registry-server" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.207990 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a395e7-cdab-40db-ae2c-1d746a31aeec" containerName="registry-server" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.209632 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:02 crc kubenswrapper[4810]: I0219 16:18:02.227380 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:02.933143 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:02.933340 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95tfz\" (UniqueName: \"kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:02.933366 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.092486 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95tfz\" (UniqueName: \"kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.092527 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.092617 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.093171 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.093441 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.136130 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95tfz\" (UniqueName: \"kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz\") pod \"redhat-marketplace-9mm6p\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.428509 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:03 crc kubenswrapper[4810]: I0219 16:18:03.927353 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:04 crc kubenswrapper[4810]: I0219 16:18:04.172307 4810 generic.go:334] "Generic (PLEG): container finished" podID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerID="727e747b80d7b6f50c3abcbcf2d1a7df66b394bb5d4bed9f6c637270e5269b28" exitCode=0 Feb 19 16:18:04 crc kubenswrapper[4810]: I0219 16:18:04.172381 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerDied","Data":"727e747b80d7b6f50c3abcbcf2d1a7df66b394bb5d4bed9f6c637270e5269b28"} Feb 19 16:18:04 crc kubenswrapper[4810]: I0219 16:18:04.172413 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerStarted","Data":"6d3216b043794cb744c98cc73cbb45b1ff615a9067efafa48c652be47cd88065"} Feb 19 16:18:06 crc kubenswrapper[4810]: I0219 16:18:06.197356 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerStarted","Data":"a0036126afa42137de8816cea32a8f172bdb2f49584406dc4b2b19e61cff06cc"} Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.210980 4810 generic.go:334] "Generic (PLEG): container finished" podID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerID="a0036126afa42137de8816cea32a8f172bdb2f49584406dc4b2b19e61cff06cc" exitCode=0 Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.211032 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" 
event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerDied","Data":"a0036126afa42137de8816cea32a8f172bdb2f49584406dc4b2b19e61cff06cc"} Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.778942 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bwbpv"] Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.782344 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.794786 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwbpv"] Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.894582 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx454\" (UniqueName: \"kubernetes.io/projected/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-kube-api-access-bx454\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.894715 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-utilities\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.894964 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-catalog-content\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.996646 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-catalog-content\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.997007 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx454\" (UniqueName: \"kubernetes.io/projected/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-kube-api-access-bx454\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.997047 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-utilities\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.997137 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-catalog-content\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:07 crc kubenswrapper[4810]: I0219 16:18:07.997463 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-utilities\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:08 crc kubenswrapper[4810]: I0219 16:18:08.015476 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx454\" 
(UniqueName: \"kubernetes.io/projected/05b0324a-36c1-419b-8bdd-e41ad42a6a3f-kube-api-access-bx454\") pod \"redhat-operators-bwbpv\" (UID: \"05b0324a-36c1-419b-8bdd-e41ad42a6a3f\") " pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:08 crc kubenswrapper[4810]: I0219 16:18:08.102253 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:08 crc kubenswrapper[4810]: I0219 16:18:08.238607 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerStarted","Data":"069b1791957e989d646d3208ea272273d0d112531c277fc7259a952f9cba60cf"} Feb 19 16:18:08 crc kubenswrapper[4810]: I0219 16:18:08.284583 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9mm6p" podStartSLOduration=2.85304976 podStartE2EDuration="6.284562965s" podCreationTimestamp="2026-02-19 16:18:02 +0000 UTC" firstStartedPulling="2026-02-19 16:18:04.174535469 +0000 UTC m=+4113.656565603" lastFinishedPulling="2026-02-19 16:18:07.606048674 +0000 UTC m=+4117.088078808" observedRunningTime="2026-02-19 16:18:08.263105154 +0000 UTC m=+4117.745135308" watchObservedRunningTime="2026-02-19 16:18:08.284562965 +0000 UTC m=+4117.766593089" Feb 19 16:18:08 crc kubenswrapper[4810]: I0219 16:18:08.747659 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwbpv"] Feb 19 16:18:09 crc kubenswrapper[4810]: I0219 16:18:09.247181 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwbpv" event={"ID":"05b0324a-36c1-419b-8bdd-e41ad42a6a3f","Type":"ContainerStarted","Data":"93a8135c28be8635f5340efbe9657c055e768c0e32dd5af6f1b72560d483c3e0"} Feb 19 16:18:10 crc kubenswrapper[4810]: I0219 16:18:10.262693 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="05b0324a-36c1-419b-8bdd-e41ad42a6a3f" containerID="9a684aad47e63c793748f0e63d3e8324617bba9ecfb36e6d1acb733f20e678a2" exitCode=0 Feb 19 16:18:10 crc kubenswrapper[4810]: I0219 16:18:10.262841 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwbpv" event={"ID":"05b0324a-36c1-419b-8bdd-e41ad42a6a3f","Type":"ContainerDied","Data":"9a684aad47e63c793748f0e63d3e8324617bba9ecfb36e6d1acb733f20e678a2"} Feb 19 16:18:13 crc kubenswrapper[4810]: I0219 16:18:13.429627 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:13 crc kubenswrapper[4810]: I0219 16:18:13.430107 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:13 crc kubenswrapper[4810]: I0219 16:18:13.512265 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:14 crc kubenswrapper[4810]: I0219 16:18:14.349114 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:14 crc kubenswrapper[4810]: I0219 16:18:14.772877 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:16 crc kubenswrapper[4810]: I0219 16:18:16.326037 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9mm6p" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="registry-server" containerID="cri-o://069b1791957e989d646d3208ea272273d0d112531c277fc7259a952f9cba60cf" gracePeriod=2 Feb 19 16:18:17 crc kubenswrapper[4810]: I0219 16:18:17.337909 4810 generic.go:334] "Generic (PLEG): container finished" podID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerID="069b1791957e989d646d3208ea272273d0d112531c277fc7259a952f9cba60cf" 
exitCode=0 Feb 19 16:18:17 crc kubenswrapper[4810]: I0219 16:18:17.337953 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerDied","Data":"069b1791957e989d646d3208ea272273d0d112531c277fc7259a952f9cba60cf"} Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.537105 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.537359 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.771589 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.844033 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content\") pod \"a123ea72-e0fc-43b7-b1d9-af79a382d010\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.844143 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities\") pod \"a123ea72-e0fc-43b7-b1d9-af79a382d010\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.844187 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95tfz\" (UniqueName: \"kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz\") pod \"a123ea72-e0fc-43b7-b1d9-af79a382d010\" (UID: \"a123ea72-e0fc-43b7-b1d9-af79a382d010\") " Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.849961 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities" (OuterVolumeSpecName: "utilities") pod "a123ea72-e0fc-43b7-b1d9-af79a382d010" (UID: "a123ea72-e0fc-43b7-b1d9-af79a382d010"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.851509 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz" (OuterVolumeSpecName: "kube-api-access-95tfz") pod "a123ea72-e0fc-43b7-b1d9-af79a382d010" (UID: "a123ea72-e0fc-43b7-b1d9-af79a382d010"). InnerVolumeSpecName "kube-api-access-95tfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.875101 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a123ea72-e0fc-43b7-b1d9-af79a382d010" (UID: "a123ea72-e0fc-43b7-b1d9-af79a382d010"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.951403 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.951435 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a123ea72-e0fc-43b7-b1d9-af79a382d010-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:19 crc kubenswrapper[4810]: I0219 16:18:19.951445 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95tfz\" (UniqueName: \"kubernetes.io/projected/a123ea72-e0fc-43b7-b1d9-af79a382d010-kube-api-access-95tfz\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.366672 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9mm6p" Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.366670 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9mm6p" event={"ID":"a123ea72-e0fc-43b7-b1d9-af79a382d010","Type":"ContainerDied","Data":"6d3216b043794cb744c98cc73cbb45b1ff615a9067efafa48c652be47cd88065"} Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.366822 4810 scope.go:117] "RemoveContainer" containerID="069b1791957e989d646d3208ea272273d0d112531c277fc7259a952f9cba60cf" Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.369386 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwbpv" event={"ID":"05b0324a-36c1-419b-8bdd-e41ad42a6a3f","Type":"ContainerStarted","Data":"79902e166f5edf35fbc7fb13052676c944ef7dba0af06603968486e0cdfcd6fe"} Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.417656 4810 scope.go:117] "RemoveContainer" containerID="a0036126afa42137de8816cea32a8f172bdb2f49584406dc4b2b19e61cff06cc" Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.429439 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.443670 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9mm6p"] Feb 19 16:18:20 crc kubenswrapper[4810]: I0219 16:18:20.538876 4810 scope.go:117] "RemoveContainer" containerID="727e747b80d7b6f50c3abcbcf2d1a7df66b394bb5d4bed9f6c637270e5269b28" Feb 19 16:18:21 crc kubenswrapper[4810]: I0219 16:18:21.460942 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" path="/var/lib/kubelet/pods/a123ea72-e0fc-43b7-b1d9-af79a382d010/volumes" Feb 19 16:18:23 crc kubenswrapper[4810]: I0219 16:18:23.410189 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="05b0324a-36c1-419b-8bdd-e41ad42a6a3f" containerID="79902e166f5edf35fbc7fb13052676c944ef7dba0af06603968486e0cdfcd6fe" exitCode=0 Feb 19 16:18:23 crc kubenswrapper[4810]: I0219 16:18:23.410353 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwbpv" event={"ID":"05b0324a-36c1-419b-8bdd-e41ad42a6a3f","Type":"ContainerDied","Data":"79902e166f5edf35fbc7fb13052676c944ef7dba0af06603968486e0cdfcd6fe"} Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.218073 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:24 crc kubenswrapper[4810]: E0219 16:18:24.219101 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="registry-server" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.219131 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="registry-server" Feb 19 16:18:24 crc kubenswrapper[4810]: E0219 16:18:24.219181 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="extract-utilities" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.219195 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="extract-utilities" Feb 19 16:18:24 crc kubenswrapper[4810]: E0219 16:18:24.219224 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="extract-content" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.219237 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" containerName="extract-content" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.219895 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a123ea72-e0fc-43b7-b1d9-af79a382d010" 
containerName="registry-server" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.223579 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.239263 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.365588 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rskwc\" (UniqueName: \"kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.365867 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.366054 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.468607 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rskwc\" (UniqueName: \"kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") 
" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.468818 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.468926 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.469406 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.469406 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.489480 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rskwc\" (UniqueName: \"kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc\") pod \"certified-operators-fsjqs\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " 
pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:24 crc kubenswrapper[4810]: I0219 16:18:24.580370 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.111470 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.430609 4810 generic.go:334] "Generic (PLEG): container finished" podID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerID="f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062" exitCode=0 Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.430926 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerDied","Data":"f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062"} Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.430967 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerStarted","Data":"543aee6faf2d0487412dcc9038ac404f9f42cb7c9ccaddf0001984891f549f8e"} Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.454394 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwbpv" event={"ID":"05b0324a-36c1-419b-8bdd-e41ad42a6a3f","Type":"ContainerStarted","Data":"89278896eb5981099add6badbd7a3e1af7af3548857a31a60c78e5beeb65a030"} Feb 19 16:18:25 crc kubenswrapper[4810]: I0219 16:18:25.477510 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bwbpv" podStartSLOduration=4.199355101 podStartE2EDuration="18.477493323s" podCreationTimestamp="2026-02-19 16:18:07 +0000 UTC" firstStartedPulling="2026-02-19 
16:18:10.265773298 +0000 UTC m=+4119.747803462" lastFinishedPulling="2026-02-19 16:18:24.54391155 +0000 UTC m=+4134.025941684" observedRunningTime="2026-02-19 16:18:25.472582141 +0000 UTC m=+4134.954612275" watchObservedRunningTime="2026-02-19 16:18:25.477493323 +0000 UTC m=+4134.959523447" Feb 19 16:18:26 crc kubenswrapper[4810]: I0219 16:18:26.455457 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerStarted","Data":"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94"} Feb 19 16:18:28 crc kubenswrapper[4810]: I0219 16:18:28.103249 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:28 crc kubenswrapper[4810]: I0219 16:18:28.103532 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:28 crc kubenswrapper[4810]: I0219 16:18:28.476649 4810 generic.go:334] "Generic (PLEG): container finished" podID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerID="1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94" exitCode=0 Feb 19 16:18:28 crc kubenswrapper[4810]: I0219 16:18:28.476699 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerDied","Data":"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94"} Feb 19 16:18:29 crc kubenswrapper[4810]: I0219 16:18:29.159136 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bwbpv" podUID="05b0324a-36c1-419b-8bdd-e41ad42a6a3f" containerName="registry-server" probeResult="failure" output=< Feb 19 16:18:29 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:18:29 crc kubenswrapper[4810]: > Feb 19 16:18:29 
crc kubenswrapper[4810]: I0219 16:18:29.492037 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerStarted","Data":"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e"} Feb 19 16:18:29 crc kubenswrapper[4810]: I0219 16:18:29.517442 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsjqs" podStartSLOduration=2.061526893 podStartE2EDuration="5.517419015s" podCreationTimestamp="2026-02-19 16:18:24 +0000 UTC" firstStartedPulling="2026-02-19 16:18:25.434771294 +0000 UTC m=+4134.916801428" lastFinishedPulling="2026-02-19 16:18:28.890663426 +0000 UTC m=+4138.372693550" observedRunningTime="2026-02-19 16:18:29.516637236 +0000 UTC m=+4138.998667400" watchObservedRunningTime="2026-02-19 16:18:29.517419015 +0000 UTC m=+4138.999449149" Feb 19 16:18:34 crc kubenswrapper[4810]: I0219 16:18:34.581351 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:34 crc kubenswrapper[4810]: I0219 16:18:34.582057 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:34 crc kubenswrapper[4810]: I0219 16:18:34.656568 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:35 crc kubenswrapper[4810]: I0219 16:18:35.648083 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:35 crc kubenswrapper[4810]: I0219 16:18:35.720548 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:37 crc kubenswrapper[4810]: I0219 16:18:37.587321 4810 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/certified-operators-fsjqs" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="registry-server" containerID="cri-o://f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e" gracePeriod=2 Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.143096 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.190186 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.211123 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content\") pod \"18dc9098-5a5e-438e-88d7-b611f88e7e56\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.211240 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rskwc\" (UniqueName: \"kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc\") pod \"18dc9098-5a5e-438e-88d7-b611f88e7e56\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.211378 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities\") pod \"18dc9098-5a5e-438e-88d7-b611f88e7e56\" (UID: \"18dc9098-5a5e-438e-88d7-b611f88e7e56\") " Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.212177 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities" (OuterVolumeSpecName: "utilities") pod "18dc9098-5a5e-438e-88d7-b611f88e7e56" 
(UID: "18dc9098-5a5e-438e-88d7-b611f88e7e56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.220716 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc" (OuterVolumeSpecName: "kube-api-access-rskwc") pod "18dc9098-5a5e-438e-88d7-b611f88e7e56" (UID: "18dc9098-5a5e-438e-88d7-b611f88e7e56"). InnerVolumeSpecName "kube-api-access-rskwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.243806 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bwbpv" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.277047 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18dc9098-5a5e-438e-88d7-b611f88e7e56" (UID: "18dc9098-5a5e-438e-88d7-b611f88e7e56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.313436 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.313464 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dc9098-5a5e-438e-88d7-b611f88e7e56-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.313475 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rskwc\" (UniqueName: \"kubernetes.io/projected/18dc9098-5a5e-438e-88d7-b611f88e7e56-kube-api-access-rskwc\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.600073 4810 generic.go:334] "Generic (PLEG): container finished" podID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerID="f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e" exitCode=0 Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.600169 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerDied","Data":"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e"} Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.600403 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsjqs" event={"ID":"18dc9098-5a5e-438e-88d7-b611f88e7e56","Type":"ContainerDied","Data":"543aee6faf2d0487412dcc9038ac404f9f42cb7c9ccaddf0001984891f549f8e"} Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.600440 4810 scope.go:117] "RemoveContainer" containerID="f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 
16:18:38.600209 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsjqs" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.631389 4810 scope.go:117] "RemoveContainer" containerID="1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.676227 4810 scope.go:117] "RemoveContainer" containerID="f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.714207 4810 scope.go:117] "RemoveContainer" containerID="f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e" Feb 19 16:18:38 crc kubenswrapper[4810]: E0219 16:18:38.714854 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e\": container with ID starting with f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e not found: ID does not exist" containerID="f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.714907 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e"} err="failed to get container status \"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e\": rpc error: code = NotFound desc = could not find container \"f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e\": container with ID starting with f5a6dd294de86b91df249d4c74319c874e8cb7930dab7621079b8f6cbbf62a7e not found: ID does not exist" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.714936 4810 scope.go:117] "RemoveContainer" containerID="1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94" Feb 19 16:18:38 crc kubenswrapper[4810]: E0219 16:18:38.715342 4810 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94\": container with ID starting with 1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94 not found: ID does not exist" containerID="1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.715388 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94"} err="failed to get container status \"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94\": rpc error: code = NotFound desc = could not find container \"1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94\": container with ID starting with 1cb29dd13e72da68f942f209eb6e0550ffc40a6639757fdf8af6c17ba48e7f94 not found: ID does not exist" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.715425 4810 scope.go:117] "RemoveContainer" containerID="f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062" Feb 19 16:18:38 crc kubenswrapper[4810]: E0219 16:18:38.715830 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062\": container with ID starting with f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062 not found: ID does not exist" containerID="f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.715855 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062"} err="failed to get container status \"f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062\": rpc error: code = NotFound 
desc = could not find container \"f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062\": container with ID starting with f8f03eba801535218b99db7ea105ed2a091125b7d3975eb406370048e19ab062 not found: ID does not exist" Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.725839 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:38 crc kubenswrapper[4810]: I0219 16:18:38.736286 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsjqs"] Feb 19 16:18:39 crc kubenswrapper[4810]: I0219 16:18:39.457319 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" path="/var/lib/kubelet/pods/18dc9098-5a5e-438e-88d7-b611f88e7e56/volumes" Feb 19 16:18:39 crc kubenswrapper[4810]: I0219 16:18:39.538006 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwbpv"] Feb 19 16:18:39 crc kubenswrapper[4810]: I0219 16:18:39.918012 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"] Feb 19 16:18:39 crc kubenswrapper[4810]: I0219 16:18:39.919040 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzkwp" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="registry-server" containerID="cri-o://4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034" gracePeriod=2 Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.438794 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.462488 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content\") pod \"cb41d90e-0896-4229-a19b-a8577292bbf6\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.462555 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities\") pod \"cb41d90e-0896-4229-a19b-a8577292bbf6\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.462617 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79c7\" (UniqueName: \"kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7\") pod \"cb41d90e-0896-4229-a19b-a8577292bbf6\" (UID: \"cb41d90e-0896-4229-a19b-a8577292bbf6\") " Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.473518 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities" (OuterVolumeSpecName: "utilities") pod "cb41d90e-0896-4229-a19b-a8577292bbf6" (UID: "cb41d90e-0896-4229-a19b-a8577292bbf6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.475568 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7" (OuterVolumeSpecName: "kube-api-access-s79c7") pod "cb41d90e-0896-4229-a19b-a8577292bbf6" (UID: "cb41d90e-0896-4229-a19b-a8577292bbf6"). InnerVolumeSpecName "kube-api-access-s79c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.565009 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.565040 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79c7\" (UniqueName: \"kubernetes.io/projected/cb41d90e-0896-4229-a19b-a8577292bbf6-kube-api-access-s79c7\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.601680 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb41d90e-0896-4229-a19b-a8577292bbf6" (UID: "cb41d90e-0896-4229-a19b-a8577292bbf6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.631877 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerID="4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034" exitCode=0 Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.631931 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerDied","Data":"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034"} Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.631996 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzkwp" event={"ID":"cb41d90e-0896-4229-a19b-a8577292bbf6","Type":"ContainerDied","Data":"359d44fe31f278162e80c02cff4ce2bbbb87a6a8b85fc0045e405dd62ec98267"} Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.632015 
4810 scope.go:117] "RemoveContainer" containerID="4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.632381 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzkwp" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.654525 4810 scope.go:117] "RemoveContainer" containerID="563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.665986 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb41d90e-0896-4229-a19b-a8577292bbf6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.672973 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"] Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.682748 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzkwp"] Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.704730 4810 scope.go:117] "RemoveContainer" containerID="446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.725816 4810 scope.go:117] "RemoveContainer" containerID="4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034" Feb 19 16:18:40 crc kubenswrapper[4810]: E0219 16:18:40.726281 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034\": container with ID starting with 4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034 not found: ID does not exist" containerID="4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.726369 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034"} err="failed to get container status \"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034\": rpc error: code = NotFound desc = could not find container \"4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034\": container with ID starting with 4dcb06a25fb7e17e0617e078d6ec18aa0f057a2f73d88aa7a8ec24cf486c4034 not found: ID does not exist" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.726399 4810 scope.go:117] "RemoveContainer" containerID="563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1" Feb 19 16:18:40 crc kubenswrapper[4810]: E0219 16:18:40.726686 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1\": container with ID starting with 563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1 not found: ID does not exist" containerID="563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.726715 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1"} err="failed to get container status \"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1\": rpc error: code = NotFound desc = could not find container \"563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1\": container with ID starting with 563ec001d1590fa9c16e3de9515c66b2a8327d12667295252c48a5230c733ba1 not found: ID does not exist" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.726735 4810 scope.go:117] "RemoveContainer" containerID="446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797" Feb 19 16:18:40 crc kubenswrapper[4810]: E0219 
16:18:40.726930 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797\": container with ID starting with 446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797 not found: ID does not exist" containerID="446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797" Feb 19 16:18:40 crc kubenswrapper[4810]: I0219 16:18:40.726949 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797"} err="failed to get container status \"446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797\": rpc error: code = NotFound desc = could not find container \"446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797\": container with ID starting with 446909cef9363d01f975cdf4a69b340a8cfb1e1ae9bddb9ba40e5d706fc6a797 not found: ID does not exist" Feb 19 16:18:41 crc kubenswrapper[4810]: I0219 16:18:41.456051 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" path="/var/lib/kubelet/pods/cb41d90e-0896-4229-a19b-a8577292bbf6/volumes" Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.538289 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.539087 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.539161 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.540367 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.540479 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5" gracePeriod=600 Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.733167 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5" exitCode=0 Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.733215 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5"} Feb 19 16:18:49 crc kubenswrapper[4810]: I0219 16:18:49.733258 4810 scope.go:117] "RemoveContainer" containerID="8520d3639f4b062a60a4e8d48690a9ddf9c8f0411d765a4be05ac76b8b0faa8e" Feb 19 16:18:50 crc kubenswrapper[4810]: I0219 16:18:50.761686 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6"} Feb 19 16:20:49 crc kubenswrapper[4810]: I0219 16:20:49.537793 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:20:49 crc kubenswrapper[4810]: I0219 16:20:49.538640 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:21:19 crc kubenswrapper[4810]: I0219 16:21:19.537516 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:21:19 crc kubenswrapper[4810]: I0219 16:21:19.538137 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.537189 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.537785 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.537831 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.538625 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.538684 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" gracePeriod=600 Feb 19 16:21:49 crc kubenswrapper[4810]: E0219 16:21:49.739365 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.871999 4810 
generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" exitCode=0 Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.872241 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6"} Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.872305 4810 scope.go:117] "RemoveContainer" containerID="dd8031bab4ace844a0a95e2a4ed00eb2a49420bcc364c28753fbe8146e7a2fd5" Feb 19 16:21:49 crc kubenswrapper[4810]: I0219 16:21:49.872961 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:21:49 crc kubenswrapper[4810]: E0219 16:21:49.873532 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:22:02 crc kubenswrapper[4810]: I0219 16:22:02.439522 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:22:02 crc kubenswrapper[4810]: E0219 16:22:02.440432 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:22:14 crc kubenswrapper[4810]: I0219 16:22:14.439916 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:22:14 crc kubenswrapper[4810]: E0219 16:22:14.440873 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:22:27 crc kubenswrapper[4810]: I0219 16:22:27.439977 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:22:27 crc kubenswrapper[4810]: E0219 16:22:27.440718 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:22:39 crc kubenswrapper[4810]: I0219 16:22:39.439661 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:22:39 crc kubenswrapper[4810]: E0219 16:22:39.440886 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:22:51 crc kubenswrapper[4810]: I0219 16:22:51.451922 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:22:51 crc kubenswrapper[4810]: E0219 16:22:51.453062 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:23:06 crc kubenswrapper[4810]: I0219 16:23:06.441100 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:23:06 crc kubenswrapper[4810]: E0219 16:23:06.441868 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:23:19 crc kubenswrapper[4810]: I0219 16:23:19.439803 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:23:19 crc kubenswrapper[4810]: E0219 16:23:19.440826 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:23:33 crc kubenswrapper[4810]: I0219 16:23:33.440206 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:23:33 crc kubenswrapper[4810]: E0219 16:23:33.441312 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:23:45 crc kubenswrapper[4810]: I0219 16:23:45.438960 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:23:45 crc kubenswrapper[4810]: E0219 16:23:45.439856 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:23:56 crc kubenswrapper[4810]: I0219 16:23:56.439966 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:23:56 crc kubenswrapper[4810]: E0219 16:23:56.443538 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:24:10 crc kubenswrapper[4810]: I0219 16:24:10.442676 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:24:10 crc kubenswrapper[4810]: E0219 16:24:10.443555 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:24:22 crc kubenswrapper[4810]: I0219 16:24:22.440253 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:24:22 crc kubenswrapper[4810]: E0219 16:24:22.441309 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:24:37 crc kubenswrapper[4810]: I0219 16:24:37.440426 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:24:37 crc kubenswrapper[4810]: E0219 16:24:37.441256 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:24:52 crc kubenswrapper[4810]: E0219 16:24:52.096855 4810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:51034->38.102.83.162:41765: write tcp 38.102.83.162:51034->38.102.83.162:41765: write: connection reset by peer Feb 19 16:24:52 crc kubenswrapper[4810]: I0219 16:24:52.440547 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:24:52 crc kubenswrapper[4810]: E0219 16:24:52.441074 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:25:05 crc kubenswrapper[4810]: I0219 16:25:05.456374 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:25:05 crc kubenswrapper[4810]: E0219 16:25:05.465529 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:25:19 crc kubenswrapper[4810]: I0219 16:25:19.440232 4810 scope.go:117] "RemoveContainer" 
containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:25:19 crc kubenswrapper[4810]: E0219 16:25:19.441053 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:25:31 crc kubenswrapper[4810]: I0219 16:25:31.446621 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:25:31 crc kubenswrapper[4810]: E0219 16:25:31.447426 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:25:46 crc kubenswrapper[4810]: I0219 16:25:46.440113 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:25:46 crc kubenswrapper[4810]: E0219 16:25:46.441509 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:25:57 crc kubenswrapper[4810]: I0219 16:25:57.439402 4810 scope.go:117] 
"RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:25:57 crc kubenswrapper[4810]: E0219 16:25:57.440047 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:26:08 crc kubenswrapper[4810]: I0219 16:26:08.439670 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:26:08 crc kubenswrapper[4810]: E0219 16:26:08.440400 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:26:22 crc kubenswrapper[4810]: I0219 16:26:22.440410 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:26:22 crc kubenswrapper[4810]: E0219 16:26:22.441667 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:26:35 crc kubenswrapper[4810]: I0219 16:26:35.440490 
4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:26:35 crc kubenswrapper[4810]: E0219 16:26:35.442596 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:26:47 crc kubenswrapper[4810]: I0219 16:26:47.441501 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:26:47 crc kubenswrapper[4810]: E0219 16:26:47.442866 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:27:00 crc kubenswrapper[4810]: I0219 16:27:00.440507 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:27:00 crc kubenswrapper[4810]: I0219 16:27:00.795810 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2"} Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.446025 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7rbxk" podUID="66c7e596-ffa3-4687-8c80-21acecbd8075" 
containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.635560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636057 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="extract-utilities" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636079 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="extract-utilities" Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636102 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="extract-content" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636113 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="extract-content" Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636143 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636150 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636161 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636192 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636206 4810 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="extract-content" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636214 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="extract-content" Feb 19 16:28:11 crc kubenswrapper[4810]: E0219 16:28:11.636242 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="extract-utilities" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636249 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="extract-utilities" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636504 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dc9098-5a5e-438e-88d7-b611f88e7e56" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.636522 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb41d90e-0896-4229-a19b-a8577292bbf6" containerName="registry-server" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.638266 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.658890 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.768801 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gnbp\" (UniqueName: \"kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.768946 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.769000 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.871003 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.871266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.871543 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gnbp\" (UniqueName: \"kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.871651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.871723 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.897157 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gnbp\" (UniqueName: \"kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp\") pod \"redhat-operators-64pwn\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:11 crc kubenswrapper[4810]: I0219 16:28:11.970318 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:12 crc kubenswrapper[4810]: I0219 16:28:12.616656 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:13 crc kubenswrapper[4810]: I0219 16:28:13.439596 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerID="ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9" exitCode=0 Feb 19 16:28:13 crc kubenswrapper[4810]: I0219 16:28:13.446290 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:28:13 crc kubenswrapper[4810]: I0219 16:28:13.461076 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerDied","Data":"ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9"} Feb 19 16:28:13 crc kubenswrapper[4810]: I0219 16:28:13.461577 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerStarted","Data":"565952d67109a9fcad8e71d5e0aabde77b113979aa3c289f2b682f3ba5f316a6"} Feb 19 16:28:15 crc kubenswrapper[4810]: I0219 16:28:15.463391 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerStarted","Data":"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b"} Feb 19 16:28:19 crc kubenswrapper[4810]: I0219 16:28:19.519750 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerID="11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b" exitCode=0 Feb 19 16:28:19 crc kubenswrapper[4810]: I0219 16:28:19.519832 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerDied","Data":"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b"} Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.531919 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerStarted","Data":"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7"} Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.550878 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-64pwn" podStartSLOduration=3.039722839 podStartE2EDuration="9.55086217s" podCreationTimestamp="2026-02-19 16:28:11 +0000 UTC" firstStartedPulling="2026-02-19 16:28:13.445911345 +0000 UTC m=+4722.927941499" lastFinishedPulling="2026-02-19 16:28:19.957050706 +0000 UTC m=+4729.439080830" observedRunningTime="2026-02-19 16:28:20.549202959 +0000 UTC m=+4730.031233083" watchObservedRunningTime="2026-02-19 16:28:20.55086217 +0000 UTC m=+4730.032892294" Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.786065 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.790012 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.804764 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.982432 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.982861 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnnf\" (UniqueName: \"kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:20 crc kubenswrapper[4810]: I0219 16:28:20.983025 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.084831 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.084945 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.084989 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnnf\" (UniqueName: \"kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.085747 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.085783 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.108268 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncnnf\" (UniqueName: \"kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf\") pod \"community-operators-mjjlb\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.126026 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:21 crc kubenswrapper[4810]: W0219 16:28:21.651104 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeb4ac5e_9779_4e63_8008_83368b12aea5.slice/crio-b08f75ea1a51a0539020767f2a50472c00e5d58cf749563300c8504c2411cae9 WatchSource:0}: Error finding container b08f75ea1a51a0539020767f2a50472c00e5d58cf749563300c8504c2411cae9: Status 404 returned error can't find the container with id b08f75ea1a51a0539020767f2a50472c00e5d58cf749563300c8504c2411cae9 Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.659516 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.970872 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:21 crc kubenswrapper[4810]: I0219 16:28:21.971194 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:22 crc kubenswrapper[4810]: I0219 16:28:22.552824 4810 generic.go:334] "Generic (PLEG): container finished" podID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerID="1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756" exitCode=0 Feb 19 16:28:22 crc kubenswrapper[4810]: I0219 16:28:22.552890 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerDied","Data":"1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756"} Feb 19 16:28:22 crc kubenswrapper[4810]: I0219 16:28:22.552930 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" 
event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerStarted","Data":"b08f75ea1a51a0539020767f2a50472c00e5d58cf749563300c8504c2411cae9"} Feb 19 16:28:23 crc kubenswrapper[4810]: I0219 16:28:23.038244 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-64pwn" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" probeResult="failure" output=< Feb 19 16:28:23 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:28:23 crc kubenswrapper[4810]: > Feb 19 16:28:24 crc kubenswrapper[4810]: I0219 16:28:24.632182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerStarted","Data":"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf"} Feb 19 16:28:25 crc kubenswrapper[4810]: I0219 16:28:25.646428 4810 generic.go:334] "Generic (PLEG): container finished" podID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerID="15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf" exitCode=0 Feb 19 16:28:25 crc kubenswrapper[4810]: I0219 16:28:25.646527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerDied","Data":"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf"} Feb 19 16:28:26 crc kubenswrapper[4810]: I0219 16:28:26.662100 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerStarted","Data":"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27"} Feb 19 16:28:26 crc kubenswrapper[4810]: I0219 16:28:26.683484 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjjlb" 
podStartSLOduration=3.157034551 podStartE2EDuration="6.683465084s" podCreationTimestamp="2026-02-19 16:28:20 +0000 UTC" firstStartedPulling="2026-02-19 16:28:22.555186688 +0000 UTC m=+4732.037216852" lastFinishedPulling="2026-02-19 16:28:26.081617221 +0000 UTC m=+4735.563647385" observedRunningTime="2026-02-19 16:28:26.680974313 +0000 UTC m=+4736.163004447" watchObservedRunningTime="2026-02-19 16:28:26.683465084 +0000 UTC m=+4736.165495218" Feb 19 16:28:31 crc kubenswrapper[4810]: I0219 16:28:31.127062 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:31 crc kubenswrapper[4810]: I0219 16:28:31.127761 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:31 crc kubenswrapper[4810]: I0219 16:28:31.201448 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:31 crc kubenswrapper[4810]: I0219 16:28:31.803950 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:31 crc kubenswrapper[4810]: I0219 16:28:31.861037 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.038227 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-64pwn" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" probeResult="failure" output=< Feb 19 16:28:33 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:28:33 crc kubenswrapper[4810]: > Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.737318 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mjjlb" 
podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="registry-server" containerID="cri-o://346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27" gracePeriod=2 Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.895794 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.899296 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.911502 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.911645 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8ln\" (UniqueName: \"kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.911677 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:33 crc kubenswrapper[4810]: I0219 16:28:33.911769 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:28:34 crc 
kubenswrapper[4810]: I0219 16:28:34.013251 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8ln\" (UniqueName: \"kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.013296 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.013435 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.013867 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.014364 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.035180 
4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8ln\" (UniqueName: \"kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln\") pod \"certified-operators-5n9gc\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.252928 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.316994 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.532797 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities\") pod \"eeb4ac5e-9779-4e63-8008-83368b12aea5\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.533162 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content\") pod \"eeb4ac5e-9779-4e63-8008-83368b12aea5\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.533304 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncnnf\" (UniqueName: \"kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf\") pod \"eeb4ac5e-9779-4e63-8008-83368b12aea5\" (UID: \"eeb4ac5e-9779-4e63-8008-83368b12aea5\") " Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.534210 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities" (OuterVolumeSpecName: "utilities") pod "eeb4ac5e-9779-4e63-8008-83368b12aea5" (UID: "eeb4ac5e-9779-4e63-8008-83368b12aea5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.534669 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.544891 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf" (OuterVolumeSpecName: "kube-api-access-ncnnf") pod "eeb4ac5e-9779-4e63-8008-83368b12aea5" (UID: "eeb4ac5e-9779-4e63-8008-83368b12aea5"). InnerVolumeSpecName "kube-api-access-ncnnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.628948 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeb4ac5e-9779-4e63-8008-83368b12aea5" (UID: "eeb4ac5e-9779-4e63-8008-83368b12aea5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.641101 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb4ac5e-9779-4e63-8008-83368b12aea5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.641449 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncnnf\" (UniqueName: \"kubernetes.io/projected/eeb4ac5e-9779-4e63-8008-83368b12aea5-kube-api-access-ncnnf\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.751949 4810 generic.go:334] "Generic (PLEG): container finished" podID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerID="346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27" exitCode=0 Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.752010 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerDied","Data":"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27"} Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.752063 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjjlb" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.752265 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjjlb" event={"ID":"eeb4ac5e-9779-4e63-8008-83368b12aea5","Type":"ContainerDied","Data":"b08f75ea1a51a0539020767f2a50472c00e5d58cf749563300c8504c2411cae9"} Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.752409 4810 scope.go:117] "RemoveContainer" containerID="346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.781480 4810 scope.go:117] "RemoveContainer" containerID="15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf" Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.802398 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.807952 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mjjlb"] Feb 19 16:28:34 crc kubenswrapper[4810]: I0219 16:28:34.833125 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:28:35 crc kubenswrapper[4810]: W0219 16:28:35.114757 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dde1ea5_68be_4851_8816_3c7302dc2579.slice/crio-280973c57a3f46eeb314d42cd9ad11d9e2b63939b685ed7e15c043b29db4262c WatchSource:0}: Error finding container 280973c57a3f46eeb314d42cd9ad11d9e2b63939b685ed7e15c043b29db4262c: Status 404 returned error can't find the container with id 280973c57a3f46eeb314d42cd9ad11d9e2b63939b685ed7e15c043b29db4262c Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.126717 4810 scope.go:117] "RemoveContainer" containerID="1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756" Feb 
19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.274648 4810 scope.go:117] "RemoveContainer" containerID="346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27" Feb 19 16:28:35 crc kubenswrapper[4810]: E0219 16:28:35.275117 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27\": container with ID starting with 346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27 not found: ID does not exist" containerID="346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.275153 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27"} err="failed to get container status \"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27\": rpc error: code = NotFound desc = could not find container \"346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27\": container with ID starting with 346725c8d42f0e9ac0d8ff2060dd270f9f2549bd03d2d4245c0e6fb675bfcb27 not found: ID does not exist" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.275180 4810 scope.go:117] "RemoveContainer" containerID="15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf" Feb 19 16:28:35 crc kubenswrapper[4810]: E0219 16:28:35.275552 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf\": container with ID starting with 15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf not found: ID does not exist" containerID="15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.275567 4810 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf"} err="failed to get container status \"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf\": rpc error: code = NotFound desc = could not find container \"15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf\": container with ID starting with 15ccceaa5002f7c88549c7fabba52cc9a89e3ef2a9a0f0d670c55de44651b1cf not found: ID does not exist" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.275582 4810 scope.go:117] "RemoveContainer" containerID="1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756" Feb 19 16:28:35 crc kubenswrapper[4810]: E0219 16:28:35.275806 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756\": container with ID starting with 1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756 not found: ID does not exist" containerID="1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.275825 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756"} err="failed to get container status \"1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756\": rpc error: code = NotFound desc = could not find container \"1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756\": container with ID starting with 1d64cbe5478a9ad4d9cd763d9f5b1175310a03b2d98326cddede9d7757669756 not found: ID does not exist" Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.451614 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" path="/var/lib/kubelet/pods/eeb4ac5e-9779-4e63-8008-83368b12aea5/volumes" Feb 19 16:28:35 crc kubenswrapper[4810]: 
I0219 16:28:35.763387 4810 generic.go:334] "Generic (PLEG): container finished" podID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerID="c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033" exitCode=0 Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.763508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerDied","Data":"c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033"} Feb 19 16:28:35 crc kubenswrapper[4810]: I0219 16:28:35.763570 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerStarted","Data":"280973c57a3f46eeb314d42cd9ad11d9e2b63939b685ed7e15c043b29db4262c"} Feb 19 16:28:42 crc kubenswrapper[4810]: I0219 16:28:42.834683 4810 generic.go:334] "Generic (PLEG): container finished" podID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerID="8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e" exitCode=0 Feb 19 16:28:42 crc kubenswrapper[4810]: I0219 16:28:42.834955 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerDied","Data":"8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e"} Feb 19 16:28:43 crc kubenswrapper[4810]: I0219 16:28:43.049651 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-64pwn" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" probeResult="failure" output=< Feb 19 16:28:43 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:28:43 crc kubenswrapper[4810]: > Feb 19 16:28:43 crc kubenswrapper[4810]: I0219 16:28:43.854058 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerStarted","Data":"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444"} Feb 19 16:28:43 crc kubenswrapper[4810]: I0219 16:28:43.894828 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5n9gc" podStartSLOduration=3.4006189989999998 podStartE2EDuration="10.894810426s" podCreationTimestamp="2026-02-19 16:28:33 +0000 UTC" firstStartedPulling="2026-02-19 16:28:35.765308918 +0000 UTC m=+4745.247339082" lastFinishedPulling="2026-02-19 16:28:43.259500385 +0000 UTC m=+4752.741530509" observedRunningTime="2026-02-19 16:28:43.886549272 +0000 UTC m=+4753.368579396" watchObservedRunningTime="2026-02-19 16:28:43.894810426 +0000 UTC m=+4753.376840550" Feb 19 16:28:44 crc kubenswrapper[4810]: I0219 16:28:44.253913 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:44 crc kubenswrapper[4810]: I0219 16:28:44.253952 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:45 crc kubenswrapper[4810]: I0219 16:28:45.313984 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5n9gc" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="registry-server" probeResult="failure" output=< Feb 19 16:28:45 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:28:45 crc kubenswrapper[4810]: > Feb 19 16:28:52 crc kubenswrapper[4810]: I0219 16:28:52.059107 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:52 crc kubenswrapper[4810]: I0219 16:28:52.134000 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:52 crc kubenswrapper[4810]: I0219 16:28:52.308110 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:53 crc kubenswrapper[4810]: I0219 16:28:53.979600 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-64pwn" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" containerID="cri-o://81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7" gracePeriod=2 Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.330146 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.403521 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.517454 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.557652 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.631986 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content\") pod \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.632288 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gnbp\" (UniqueName: \"kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp\") pod \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.632314 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities\") pod \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\" (UID: \"9e626cbc-4140-4c84-8ecc-8b3315f6023d\") " Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.633045 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities" (OuterVolumeSpecName: "utilities") pod "9e626cbc-4140-4c84-8ecc-8b3315f6023d" (UID: "9e626cbc-4140-4c84-8ecc-8b3315f6023d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.638207 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp" (OuterVolumeSpecName: "kube-api-access-6gnbp") pod "9e626cbc-4140-4c84-8ecc-8b3315f6023d" (UID: "9e626cbc-4140-4c84-8ecc-8b3315f6023d"). InnerVolumeSpecName "kube-api-access-6gnbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.703121 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96zmk"] Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.703678 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96zmk" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="registry-server" containerID="cri-o://d3b7a9028d72c7eb4783b51f96495dc71d42e8907df523028cceebd689902934" gracePeriod=2 Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.734412 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gnbp\" (UniqueName: \"kubernetes.io/projected/9e626cbc-4140-4c84-8ecc-8b3315f6023d-kube-api-access-6gnbp\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.734440 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.813265 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e626cbc-4140-4c84-8ecc-8b3315f6023d" (UID: "9e626cbc-4140-4c84-8ecc-8b3315f6023d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.836527 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e626cbc-4140-4c84-8ecc-8b3315f6023d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.991682 4810 generic.go:334] "Generic (PLEG): container finished" podID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerID="d3b7a9028d72c7eb4783b51f96495dc71d42e8907df523028cceebd689902934" exitCode=0 Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.991769 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerDied","Data":"d3b7a9028d72c7eb4783b51f96495dc71d42e8907df523028cceebd689902934"} Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.994305 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerID="81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7" exitCode=0 Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.994395 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-64pwn" Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.994435 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerDied","Data":"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7"} Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.994463 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64pwn" event={"ID":"9e626cbc-4140-4c84-8ecc-8b3315f6023d","Type":"ContainerDied","Data":"565952d67109a9fcad8e71d5e0aabde77b113979aa3c289f2b682f3ba5f316a6"} Feb 19 16:28:54 crc kubenswrapper[4810]: I0219 16:28:54.994485 4810 scope.go:117] "RemoveContainer" containerID="81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.025521 4810 scope.go:117] "RemoveContainer" containerID="11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.095156 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.105315 4810 scope.go:117] "RemoveContainer" containerID="ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.120190 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-64pwn"] Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.228498 4810 scope.go:117] "RemoveContainer" containerID="81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7" Feb 19 16:28:55 crc kubenswrapper[4810]: E0219 16:28:55.230072 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7\": container with ID starting with 81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7 not found: ID does not exist" containerID="81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.230118 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7"} err="failed to get container status \"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7\": rpc error: code = NotFound desc = could not find container \"81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7\": container with ID starting with 81668e94dc20932a1366bc891e92da4a77667055ce44a21f7d966c87ad5627e7 not found: ID does not exist" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.230148 4810 scope.go:117] "RemoveContainer" containerID="11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b" Feb 19 16:28:55 crc kubenswrapper[4810]: E0219 16:28:55.230461 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b\": container with ID starting with 11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b not found: ID does not exist" containerID="11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.230499 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b"} err="failed to get container status \"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b\": rpc error: code = NotFound desc = could not find container \"11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b\": container with ID 
starting with 11aae31930dc1715ed8dc4d319b46478d41e580c6f80c8f65ea1a3331f2c975b not found: ID does not exist" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.230530 4810 scope.go:117] "RemoveContainer" containerID="ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9" Feb 19 16:28:55 crc kubenswrapper[4810]: E0219 16:28:55.233875 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9\": container with ID starting with ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9 not found: ID does not exist" containerID="ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.233922 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9"} err="failed to get container status \"ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9\": rpc error: code = NotFound desc = could not find container \"ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9\": container with ID starting with ca5d92eef27bd3b021224c9164b7e74cb5a78a5abd43eb0d862a50d3ae8cd3f9 not found: ID does not exist" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.263812 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.358435 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content\") pod \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.358476 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities\") pod \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.358595 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7wtj\" (UniqueName: \"kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj\") pod \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\" (UID: \"78aaed3c-dfb4-4332-bc63-4fc5342870ae\") " Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.360298 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities" (OuterVolumeSpecName: "utilities") pod "78aaed3c-dfb4-4332-bc63-4fc5342870ae" (UID: "78aaed3c-dfb4-4332-bc63-4fc5342870ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.382445 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj" (OuterVolumeSpecName: "kube-api-access-z7wtj") pod "78aaed3c-dfb4-4332-bc63-4fc5342870ae" (UID: "78aaed3c-dfb4-4332-bc63-4fc5342870ae"). InnerVolumeSpecName "kube-api-access-z7wtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.408998 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78aaed3c-dfb4-4332-bc63-4fc5342870ae" (UID: "78aaed3c-dfb4-4332-bc63-4fc5342870ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.452090 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" path="/var/lib/kubelet/pods/9e626cbc-4140-4c84-8ecc-8b3315f6023d/volumes" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.460997 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7wtj\" (UniqueName: \"kubernetes.io/projected/78aaed3c-dfb4-4332-bc63-4fc5342870ae-kube-api-access-z7wtj\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.461031 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:55 crc kubenswrapper[4810]: I0219 16:28:55.461039 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aaed3c-dfb4-4332-bc63-4fc5342870ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.005881 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96zmk" event={"ID":"78aaed3c-dfb4-4332-bc63-4fc5342870ae","Type":"ContainerDied","Data":"dc3bb4ddeed72ae65273594c85ea1bf8c3ee8a4155cd199ebba0a1a239217684"} Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.006229 4810 scope.go:117] "RemoveContainer" 
containerID="d3b7a9028d72c7eb4783b51f96495dc71d42e8907df523028cceebd689902934" Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.005910 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96zmk" Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.037462 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96zmk"] Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.048141 4810 scope.go:117] "RemoveContainer" containerID="6b9165a9af4fc64c1f899ec8f99221cc29231316d69d504fd16ef6a4ce57525c" Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.051723 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96zmk"] Feb 19 16:28:56 crc kubenswrapper[4810]: I0219 16:28:56.075529 4810 scope.go:117] "RemoveContainer" containerID="389e0276f99c2489270efb421c00ea721017dbf02f8163a0b8723a2949f2384c" Feb 19 16:28:57 crc kubenswrapper[4810]: I0219 16:28:57.450063 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" path="/var/lib/kubelet/pods/78aaed3c-dfb4-4332-bc63-4fc5342870ae/volumes" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.119123 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.119972 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.119985 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.119999 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" 
containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120005 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120014 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120020 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120032 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120038 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120047 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120053 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120072 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120077 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="extract-utilities" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120093 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" 
containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120098 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120110 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120115 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: E0219 16:29:01.120129 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120134 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="extract-content" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120317 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb4ac5e-9779-4e63-8008-83368b12aea5" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120349 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="78aaed3c-dfb4-4332-bc63-4fc5342870ae" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.120360 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e626cbc-4140-4c84-8ecc-8b3315f6023d" containerName="registry-server" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.121720 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.140044 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.286306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.286438 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xpm7\" (UniqueName: \"kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.286488 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.388698 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.388817 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4xpm7\" (UniqueName: \"kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.388861 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.389521 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.389530 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.427364 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xpm7\" (UniqueName: \"kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7\") pod \"redhat-marketplace-rgqm9\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.446562 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:01 crc kubenswrapper[4810]: I0219 16:29:01.916512 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:02 crc kubenswrapper[4810]: I0219 16:29:02.070493 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerStarted","Data":"9bf23c8154e85ae548ff97afcec39329d16c32d382c90a3d6ac8b672224668f7"} Feb 19 16:29:03 crc kubenswrapper[4810]: I0219 16:29:03.084286 4810 generic.go:334] "Generic (PLEG): container finished" podID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerID="ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2" exitCode=0 Feb 19 16:29:03 crc kubenswrapper[4810]: I0219 16:29:03.084372 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerDied","Data":"ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2"} Feb 19 16:29:04 crc kubenswrapper[4810]: I0219 16:29:04.096395 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerStarted","Data":"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca"} Feb 19 16:29:05 crc kubenswrapper[4810]: I0219 16:29:05.112646 4810 generic.go:334] "Generic (PLEG): container finished" podID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerID="710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca" exitCode=0 Feb 19 16:29:05 crc kubenswrapper[4810]: I0219 16:29:05.112711 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" 
event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerDied","Data":"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca"} Feb 19 16:29:06 crc kubenswrapper[4810]: I0219 16:29:06.124117 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerStarted","Data":"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9"} Feb 19 16:29:06 crc kubenswrapper[4810]: I0219 16:29:06.150946 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgqm9" podStartSLOduration=2.486515931 podStartE2EDuration="5.150927344s" podCreationTimestamp="2026-02-19 16:29:01 +0000 UTC" firstStartedPulling="2026-02-19 16:29:03.08721296 +0000 UTC m=+4772.569243124" lastFinishedPulling="2026-02-19 16:29:05.751624373 +0000 UTC m=+4775.233654537" observedRunningTime="2026-02-19 16:29:06.141384927 +0000 UTC m=+4775.623415071" watchObservedRunningTime="2026-02-19 16:29:06.150927344 +0000 UTC m=+4775.632957468" Feb 19 16:29:11 crc kubenswrapper[4810]: I0219 16:29:11.465662 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:11 crc kubenswrapper[4810]: I0219 16:29:11.466304 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:11 crc kubenswrapper[4810]: I0219 16:29:11.532079 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:12 crc kubenswrapper[4810]: I0219 16:29:12.274231 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:12 crc kubenswrapper[4810]: I0219 16:29:12.356604 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.215054 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgqm9" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="registry-server" containerID="cri-o://66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9" gracePeriod=2 Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.771979 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.886355 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content\") pod \"a673077a-0e58-4ad8-a5db-f6eeb283be61\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.886799 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xpm7\" (UniqueName: \"kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7\") pod \"a673077a-0e58-4ad8-a5db-f6eeb283be61\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.886989 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities\") pod \"a673077a-0e58-4ad8-a5db-f6eeb283be61\" (UID: \"a673077a-0e58-4ad8-a5db-f6eeb283be61\") " Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.887738 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities" (OuterVolumeSpecName: "utilities") pod "a673077a-0e58-4ad8-a5db-f6eeb283be61" (UID: 
"a673077a-0e58-4ad8-a5db-f6eeb283be61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.888061 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.893276 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7" (OuterVolumeSpecName: "kube-api-access-4xpm7") pod "a673077a-0e58-4ad8-a5db-f6eeb283be61" (UID: "a673077a-0e58-4ad8-a5db-f6eeb283be61"). InnerVolumeSpecName "kube-api-access-4xpm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.908064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a673077a-0e58-4ad8-a5db-f6eeb283be61" (UID: "a673077a-0e58-4ad8-a5db-f6eeb283be61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.990406 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a673077a-0e58-4ad8-a5db-f6eeb283be61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:29:14 crc kubenswrapper[4810]: I0219 16:29:14.990437 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xpm7\" (UniqueName: \"kubernetes.io/projected/a673077a-0e58-4ad8-a5db-f6eeb283be61-kube-api-access-4xpm7\") on node \"crc\" DevicePath \"\"" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.231599 4810 generic.go:334] "Generic (PLEG): container finished" podID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerID="66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9" exitCode=0 Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.231684 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqm9" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.231728 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerDied","Data":"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9"} Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.232107 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqm9" event={"ID":"a673077a-0e58-4ad8-a5db-f6eeb283be61","Type":"ContainerDied","Data":"9bf23c8154e85ae548ff97afcec39329d16c32d382c90a3d6ac8b672224668f7"} Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.232136 4810 scope.go:117] "RemoveContainer" containerID="66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.270116 4810 scope.go:117] "RemoveContainer" 
containerID="710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.279420 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.290622 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqm9"] Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.301622 4810 scope.go:117] "RemoveContainer" containerID="ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.374460 4810 scope.go:117] "RemoveContainer" containerID="66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9" Feb 19 16:29:15 crc kubenswrapper[4810]: E0219 16:29:15.374910 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9\": container with ID starting with 66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9 not found: ID does not exist" containerID="66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.374951 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9"} err="failed to get container status \"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9\": rpc error: code = NotFound desc = could not find container \"66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9\": container with ID starting with 66857f6c81a47c65bb164b0a03fa49e824fc23f4623166e8ae894a6257ce21a9 not found: ID does not exist" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.374982 4810 scope.go:117] "RemoveContainer" 
containerID="710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca" Feb 19 16:29:15 crc kubenswrapper[4810]: E0219 16:29:15.375382 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca\": container with ID starting with 710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca not found: ID does not exist" containerID="710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.375423 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca"} err="failed to get container status \"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca\": rpc error: code = NotFound desc = could not find container \"710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca\": container with ID starting with 710dcf4fbde260c53ab3771ea6e3f9c3d8f23b24754ad03ee8737a762d01c1ca not found: ID does not exist" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.375449 4810 scope.go:117] "RemoveContainer" containerID="ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2" Feb 19 16:29:15 crc kubenswrapper[4810]: E0219 16:29:15.375749 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2\": container with ID starting with ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2 not found: ID does not exist" containerID="ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.375774 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2"} err="failed to get container status \"ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2\": rpc error: code = NotFound desc = could not find container \"ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2\": container with ID starting with ae4c13191ff8d66385f46593ebb524ac0d7b604bcdad265511c266dac42d3ed2 not found: ID does not exist" Feb 19 16:29:15 crc kubenswrapper[4810]: I0219 16:29:15.457165 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" path="/var/lib/kubelet/pods/a673077a-0e58-4ad8-a5db-f6eeb283be61/volumes" Feb 19 16:29:19 crc kubenswrapper[4810]: I0219 16:29:19.537913 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:29:19 crc kubenswrapper[4810]: I0219 16:29:19.538501 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:29:49 crc kubenswrapper[4810]: I0219 16:29:49.537153 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:29:49 crc kubenswrapper[4810]: I0219 16:29:49.537728 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.163179 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748"] Feb 19 16:30:00 crc kubenswrapper[4810]: E0219 16:30:00.164367 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="registry-server" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.164389 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="registry-server" Feb 19 16:30:00 crc kubenswrapper[4810]: E0219 16:30:00.164438 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="extract-utilities" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.164446 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="extract-utilities" Feb 19 16:30:00 crc kubenswrapper[4810]: E0219 16:30:00.164463 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="extract-content" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.164471 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="extract-content" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.164703 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a673077a-0e58-4ad8-a5db-f6eeb283be61" containerName="registry-server" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.165688 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.168668 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.169472 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.183362 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748"] Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.196909 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.197305 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srws\" (UniqueName: \"kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.197368 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.299141 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srws\" (UniqueName: \"kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.299202 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.299485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.300316 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.307018 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.334770 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srws\" (UniqueName: \"kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws\") pod \"collect-profiles-29525310-6p748\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:00 crc kubenswrapper[4810]: I0219 16:30:00.498137 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:01 crc kubenswrapper[4810]: I0219 16:30:01.022486 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748"] Feb 19 16:30:01 crc kubenswrapper[4810]: I0219 16:30:01.820297 4810 generic.go:334] "Generic (PLEG): container finished" podID="c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" containerID="bd22ffcc93ad514f9c9f0fd30ea32bc13e72bf86f993fc9a5d41d80446f59f0f" exitCode=0 Feb 19 16:30:01 crc kubenswrapper[4810]: I0219 16:30:01.820840 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" event={"ID":"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a","Type":"ContainerDied","Data":"bd22ffcc93ad514f9c9f0fd30ea32bc13e72bf86f993fc9a5d41d80446f59f0f"} Feb 19 16:30:01 crc kubenswrapper[4810]: I0219 16:30:01.821071 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" 
event={"ID":"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a","Type":"ContainerStarted","Data":"d65d3016f5d8bd6cf29a5305a05195f6a5475db2ee983b718f17ecf85f25023c"} Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.308098 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.380003 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume\") pod \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.380218 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume\") pod \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.380381 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2srws\" (UniqueName: \"kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws\") pod \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\" (UID: \"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a\") " Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.381438 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" (UID: "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.390731 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" (UID: "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.391723 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws" (OuterVolumeSpecName: "kube-api-access-2srws") pod "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" (UID: "c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a"). InnerVolumeSpecName "kube-api-access-2srws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.484160 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.484565 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2srws\" (UniqueName: \"kubernetes.io/projected/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-kube-api-access-2srws\") on node \"crc\" DevicePath \"\"" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.484708 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.846722 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" 
event={"ID":"c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a","Type":"ContainerDied","Data":"d65d3016f5d8bd6cf29a5305a05195f6a5475db2ee983b718f17ecf85f25023c"} Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.847062 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d65d3016f5d8bd6cf29a5305a05195f6a5475db2ee983b718f17ecf85f25023c" Feb 19 16:30:03 crc kubenswrapper[4810]: I0219 16:30:03.847132 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525310-6p748" Feb 19 16:30:04 crc kubenswrapper[4810]: I0219 16:30:04.413385 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb"] Feb 19 16:30:04 crc kubenswrapper[4810]: I0219 16:30:04.423576 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525265-dnbnb"] Feb 19 16:30:05 crc kubenswrapper[4810]: I0219 16:30:05.458718 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ebe856b-d546-48e1-862d-d9f039620b73" path="/var/lib/kubelet/pods/9ebe856b-d546-48e1-862d-d9f039620b73/volumes" Feb 19 16:30:19 crc kubenswrapper[4810]: I0219 16:30:19.537841 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:30:19 crc kubenswrapper[4810]: I0219 16:30:19.538792 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:30:19 crc 
kubenswrapper[4810]: I0219 16:30:19.538882 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:30:19 crc kubenswrapper[4810]: I0219 16:30:19.540375 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:30:19 crc kubenswrapper[4810]: I0219 16:30:19.540523 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2" gracePeriod=600 Feb 19 16:30:20 crc kubenswrapper[4810]: I0219 16:30:20.045819 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2" exitCode=0 Feb 19 16:30:20 crc kubenswrapper[4810]: I0219 16:30:20.045887 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2"} Feb 19 16:30:20 crc kubenswrapper[4810]: I0219 16:30:20.046228 4810 scope.go:117] "RemoveContainer" containerID="fe77cba37c7a5825c8d865a2480ab7487bd5dce94a751361742d3c3e639d1af6" Feb 19 16:30:21 crc kubenswrapper[4810]: I0219 16:30:21.069997 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3"} Feb 19 16:30:44 crc kubenswrapper[4810]: I0219 16:30:44.483296 4810 scope.go:117] "RemoveContainer" containerID="ea1fa5fe82d5994ff114d9b04616fab9d73e059f9ec50ceb138445dbcf8a33cf" Feb 19 16:32:49 crc kubenswrapper[4810]: I0219 16:32:49.537559 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:32:49 crc kubenswrapper[4810]: I0219 16:32:49.538138 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:33:05 crc kubenswrapper[4810]: E0219 16:33:05.724932 4810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:36526->38.102.83.162:41765: write tcp 38.102.83.162:36526->38.102.83.162:41765: write: broken pipe Feb 19 16:33:14 crc kubenswrapper[4810]: E0219 16:33:14.201382 4810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:42186->38.102.83.162:41765: write tcp 38.102.83.162:42186->38.102.83.162:41765: write: broken pipe Feb 19 16:33:19 crc kubenswrapper[4810]: I0219 16:33:19.537198 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:33:19 crc kubenswrapper[4810]: I0219 
16:33:19.537720 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:33:49 crc kubenswrapper[4810]: I0219 16:33:49.538206 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:33:49 crc kubenswrapper[4810]: I0219 16:33:49.538987 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:33:49 crc kubenswrapper[4810]: I0219 16:33:49.539069 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:33:49 crc kubenswrapper[4810]: I0219 16:33:49.540258 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:33:49 crc kubenswrapper[4810]: I0219 16:33:49.540378 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" 
containerName="machine-config-daemon" containerID="cri-o://c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" gracePeriod=600 Feb 19 16:33:50 crc kubenswrapper[4810]: E0219 16:33:50.008418 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:33:50 crc kubenswrapper[4810]: I0219 16:33:50.414042 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" exitCode=0 Feb 19 16:33:50 crc kubenswrapper[4810]: I0219 16:33:50.414090 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3"} Feb 19 16:33:50 crc kubenswrapper[4810]: I0219 16:33:50.414134 4810 scope.go:117] "RemoveContainer" containerID="61e1161022488800f1a95fde17c57eb209b943fcdd172d04938731e0ad552ce2" Feb 19 16:33:50 crc kubenswrapper[4810]: I0219 16:33:50.414706 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:33:50 crc kubenswrapper[4810]: E0219 16:33:50.414939 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:34:04 crc kubenswrapper[4810]: I0219 16:34:04.439987 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:34:04 crc kubenswrapper[4810]: E0219 16:34:04.440819 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:34:15 crc kubenswrapper[4810]: I0219 16:34:15.439271 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:34:15 crc kubenswrapper[4810]: E0219 16:34:15.440190 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:34:27 crc kubenswrapper[4810]: I0219 16:34:27.442076 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:34:27 crc kubenswrapper[4810]: E0219 16:34:27.443122 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:34:40 crc kubenswrapper[4810]: I0219 16:34:40.439679 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:34:40 crc kubenswrapper[4810]: E0219 16:34:40.440584 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:34:51 crc kubenswrapper[4810]: I0219 16:34:51.446594 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:34:51 crc kubenswrapper[4810]: E0219 16:34:51.447273 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:35:02 crc kubenswrapper[4810]: I0219 16:35:02.440128 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:35:02 crc kubenswrapper[4810]: E0219 16:35:02.441217 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:35:16 crc kubenswrapper[4810]: I0219 16:35:16.440191 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:35:16 crc kubenswrapper[4810]: E0219 16:35:16.441450 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:35:31 crc kubenswrapper[4810]: I0219 16:35:31.447850 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:35:31 crc kubenswrapper[4810]: E0219 16:35:31.448778 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:35:44 crc kubenswrapper[4810]: I0219 16:35:44.440142 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:35:44 crc kubenswrapper[4810]: E0219 16:35:44.441085 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:35:59 crc kubenswrapper[4810]: I0219 16:35:59.439732 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:35:59 crc kubenswrapper[4810]: E0219 16:35:59.440826 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:36:13 crc kubenswrapper[4810]: I0219 16:36:13.439677 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:36:13 crc kubenswrapper[4810]: E0219 16:36:13.443920 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:36:24 crc kubenswrapper[4810]: I0219 16:36:24.438996 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:36:24 crc kubenswrapper[4810]: E0219 16:36:24.439931 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:36:35 crc kubenswrapper[4810]: I0219 16:36:35.439469 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:36:35 crc kubenswrapper[4810]: E0219 16:36:35.440314 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:36:49 crc kubenswrapper[4810]: I0219 16:36:49.440013 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:36:49 crc kubenswrapper[4810]: E0219 16:36:49.440645 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:37:01 crc kubenswrapper[4810]: I0219 16:37:01.448478 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:37:01 crc kubenswrapper[4810]: E0219 16:37:01.449485 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:37:15 crc kubenswrapper[4810]: I0219 16:37:15.455972 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:37:15 crc kubenswrapper[4810]: E0219 16:37:15.457212 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:37:27 crc kubenswrapper[4810]: I0219 16:37:27.439261 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:37:27 crc kubenswrapper[4810]: E0219 16:37:27.439878 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:37:40 crc kubenswrapper[4810]: I0219 16:37:40.440443 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:37:40 crc kubenswrapper[4810]: E0219 16:37:40.441459 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:37:52 crc kubenswrapper[4810]: I0219 16:37:52.439605 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:37:52 crc kubenswrapper[4810]: E0219 16:37:52.440657 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:38:07 crc kubenswrapper[4810]: I0219 16:38:07.439948 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:38:07 crc kubenswrapper[4810]: E0219 16:38:07.440684 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:38:22 crc kubenswrapper[4810]: I0219 16:38:22.439685 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:38:22 crc kubenswrapper[4810]: E0219 16:38:22.440397 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:38:33 crc kubenswrapper[4810]: I0219 16:38:33.440773 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:38:33 crc kubenswrapper[4810]: E0219 16:38:33.441896 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:38:48 crc kubenswrapper[4810]: I0219 16:38:48.439807 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:38:48 crc kubenswrapper[4810]: E0219 16:38:48.440560 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:39:00 crc kubenswrapper[4810]: I0219 16:39:00.441074 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:39:01 crc kubenswrapper[4810]: I0219 16:39:01.081811 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625"} Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.612662 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:02 crc kubenswrapper[4810]: E0219 16:39:02.614196 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" containerName="collect-profiles" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.614212 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" containerName="collect-profiles" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.614543 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ec97af-4e4f-48fd-a173-d19b4c7c5d3a" containerName="collect-profiles" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.616357 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.625453 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.770707 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.770806 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bg6\" (UniqueName: \"kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.770933 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.873349 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.873428 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w2bg6\" (UniqueName: \"kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.873505 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.874363 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.874521 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.910913 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2bg6\" (UniqueName: \"kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6\") pod \"certified-operators-fgvsh\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:02 crc kubenswrapper[4810]: I0219 16:39:02.959171 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:03 crc kubenswrapper[4810]: I0219 16:39:03.468663 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:03 crc kubenswrapper[4810]: W0219 16:39:03.469568 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f034bea_4449_48b7_b4dc_359682644709.slice/crio-30f07e3fe9974f137e654fec2499095509becdecd424a45b6ebe9ca7430c3355 WatchSource:0}: Error finding container 30f07e3fe9974f137e654fec2499095509becdecd424a45b6ebe9ca7430c3355: Status 404 returned error can't find the container with id 30f07e3fe9974f137e654fec2499095509becdecd424a45b6ebe9ca7430c3355 Feb 19 16:39:04 crc kubenswrapper[4810]: I0219 16:39:04.130852 4810 generic.go:334] "Generic (PLEG): container finished" podID="3f034bea-4449-48b7-b4dc-359682644709" containerID="5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4" exitCode=0 Feb 19 16:39:04 crc kubenswrapper[4810]: I0219 16:39:04.130941 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerDied","Data":"5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4"} Feb 19 16:39:04 crc kubenswrapper[4810]: I0219 16:39:04.132568 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerStarted","Data":"30f07e3fe9974f137e654fec2499095509becdecd424a45b6ebe9ca7430c3355"} Feb 19 16:39:04 crc kubenswrapper[4810]: I0219 16:39:04.135282 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:39:05 crc kubenswrapper[4810]: I0219 16:39:05.149802 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerStarted","Data":"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452"} Feb 19 16:39:07 crc kubenswrapper[4810]: I0219 16:39:07.176572 4810 generic.go:334] "Generic (PLEG): container finished" podID="3f034bea-4449-48b7-b4dc-359682644709" containerID="ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452" exitCode=0 Feb 19 16:39:07 crc kubenswrapper[4810]: I0219 16:39:07.176650 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerDied","Data":"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452"} Feb 19 16:39:08 crc kubenswrapper[4810]: I0219 16:39:08.197955 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerStarted","Data":"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081"} Feb 19 16:39:08 crc kubenswrapper[4810]: I0219 16:39:08.239920 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fgvsh" podStartSLOduration=2.797637962 podStartE2EDuration="6.239890736s" podCreationTimestamp="2026-02-19 16:39:02 +0000 UTC" firstStartedPulling="2026-02-19 16:39:04.134241862 +0000 UTC m=+5373.616272026" lastFinishedPulling="2026-02-19 16:39:07.576494636 +0000 UTC m=+5377.058524800" observedRunningTime="2026-02-19 16:39:08.221538821 +0000 UTC m=+5377.703568995" watchObservedRunningTime="2026-02-19 16:39:08.239890736 +0000 UTC m=+5377.721920890" Feb 19 16:39:08 crc kubenswrapper[4810]: I0219 16:39:08.797626 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c0ffb8ce-a356-4416-b96c-49db30ff1947" containerName="galera" probeResult="failure" 
output="command timed out" Feb 19 16:39:08 crc kubenswrapper[4810]: I0219 16:39:08.797667 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="c0ffb8ce-a356-4416-b96c-49db30ff1947" containerName="galera" probeResult="failure" output="command timed out" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.572637 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.593007 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.599794 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.744532 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.744639 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsd6\" (UniqueName: \"kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.744707 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities\") pod \"community-operators-ch7bb\" (UID: 
\"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.847577 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.847941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.848041 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsd6\" (UniqueName: \"kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.848198 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.848447 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") 
" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.878443 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsd6\" (UniqueName: \"kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6\") pod \"community-operators-ch7bb\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:09 crc kubenswrapper[4810]: I0219 16:39:09.937761 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:10 crc kubenswrapper[4810]: I0219 16:39:10.455670 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:11 crc kubenswrapper[4810]: I0219 16:39:11.240456 4810 generic.go:334] "Generic (PLEG): container finished" podID="81d6df67-4f89-42b8-9e19-2812ece77996" containerID="e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0" exitCode=0 Feb 19 16:39:11 crc kubenswrapper[4810]: I0219 16:39:11.240558 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerDied","Data":"e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0"} Feb 19 16:39:11 crc kubenswrapper[4810]: I0219 16:39:11.240893 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerStarted","Data":"378c52d973ebcb5515fc8fc9204fa2b612ee44117d5bb74a714d7060313b49d8"} Feb 19 16:39:12 crc kubenswrapper[4810]: I0219 16:39:12.260640 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" 
event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerStarted","Data":"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83"} Feb 19 16:39:12 crc kubenswrapper[4810]: I0219 16:39:12.960420 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:12 crc kubenswrapper[4810]: I0219 16:39:12.960505 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:13 crc kubenswrapper[4810]: I0219 16:39:13.037108 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:13 crc kubenswrapper[4810]: I0219 16:39:13.357109 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:14 crc kubenswrapper[4810]: I0219 16:39:14.311581 4810 generic.go:334] "Generic (PLEG): container finished" podID="81d6df67-4f89-42b8-9e19-2812ece77996" containerID="d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83" exitCode=0 Feb 19 16:39:14 crc kubenswrapper[4810]: I0219 16:39:14.313124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerDied","Data":"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83"} Feb 19 16:39:15 crc kubenswrapper[4810]: I0219 16:39:15.327430 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerStarted","Data":"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673"} Feb 19 16:39:15 crc kubenswrapper[4810]: I0219 16:39:15.357597 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-ch7bb" podStartSLOduration=2.914650821 podStartE2EDuration="6.357577423s" podCreationTimestamp="2026-02-19 16:39:09 +0000 UTC" firstStartedPulling="2026-02-19 16:39:11.245960181 +0000 UTC m=+5380.727990335" lastFinishedPulling="2026-02-19 16:39:14.688886783 +0000 UTC m=+5384.170916937" observedRunningTime="2026-02-19 16:39:15.349829341 +0000 UTC m=+5384.831859495" watchObservedRunningTime="2026-02-19 16:39:15.357577423 +0000 UTC m=+5384.839607547" Feb 19 16:39:15 crc kubenswrapper[4810]: I0219 16:39:15.375427 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:15 crc kubenswrapper[4810]: I0219 16:39:15.375802 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fgvsh" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="registry-server" containerID="cri-o://c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081" gracePeriod=2 Feb 19 16:39:15 crc kubenswrapper[4810]: I0219 16:39:15.915987 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.102100 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2bg6\" (UniqueName: \"kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6\") pod \"3f034bea-4449-48b7-b4dc-359682644709\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.102310 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content\") pod \"3f034bea-4449-48b7-b4dc-359682644709\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.102358 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities\") pod \"3f034bea-4449-48b7-b4dc-359682644709\" (UID: \"3f034bea-4449-48b7-b4dc-359682644709\") " Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.110535 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6" (OuterVolumeSpecName: "kube-api-access-w2bg6") pod "3f034bea-4449-48b7-b4dc-359682644709" (UID: "3f034bea-4449-48b7-b4dc-359682644709"). InnerVolumeSpecName "kube-api-access-w2bg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.123698 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities" (OuterVolumeSpecName: "utilities") pod "3f034bea-4449-48b7-b4dc-359682644709" (UID: "3f034bea-4449-48b7-b4dc-359682644709"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.170061 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f034bea-4449-48b7-b4dc-359682644709" (UID: "3f034bea-4449-48b7-b4dc-359682644709"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.204973 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2bg6\" (UniqueName: \"kubernetes.io/projected/3f034bea-4449-48b7-b4dc-359682644709-kube-api-access-w2bg6\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.205394 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.205414 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f034bea-4449-48b7-b4dc-359682644709-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.340898 4810 generic.go:334] "Generic (PLEG): container finished" podID="3f034bea-4449-48b7-b4dc-359682644709" containerID="c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081" exitCode=0 Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.340963 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgvsh" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.340994 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerDied","Data":"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081"} Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.341077 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgvsh" event={"ID":"3f034bea-4449-48b7-b4dc-359682644709","Type":"ContainerDied","Data":"30f07e3fe9974f137e654fec2499095509becdecd424a45b6ebe9ca7430c3355"} Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.341108 4810 scope.go:117] "RemoveContainer" containerID="c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.396432 4810 scope.go:117] "RemoveContainer" containerID="ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.403183 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.412476 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgvsh"] Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.435465 4810 scope.go:117] "RemoveContainer" containerID="5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.485792 4810 scope.go:117] "RemoveContainer" containerID="c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081" Feb 19 16:39:16 crc kubenswrapper[4810]: E0219 16:39:16.486213 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081\": container with ID starting with c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081 not found: ID does not exist" containerID="c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.486348 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081"} err="failed to get container status \"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081\": rpc error: code = NotFound desc = could not find container \"c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081\": container with ID starting with c1237ceb27bc050239ad7d857ad7d267187a75eddcf61de8348b5ad0d374a081 not found: ID does not exist" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.486452 4810 scope.go:117] "RemoveContainer" containerID="ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452" Feb 19 16:39:16 crc kubenswrapper[4810]: E0219 16:39:16.486846 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452\": container with ID starting with ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452 not found: ID does not exist" containerID="ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.486977 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452"} err="failed to get container status \"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452\": rpc error: code = NotFound desc = could not find container \"ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452\": container with ID 
starting with ab1039b6a551294a7e68715becc61601fbef14c361a510fef74eb845ad560452 not found: ID does not exist" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.487063 4810 scope.go:117] "RemoveContainer" containerID="5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4" Feb 19 16:39:16 crc kubenswrapper[4810]: E0219 16:39:16.487466 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4\": container with ID starting with 5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4 not found: ID does not exist" containerID="5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4" Feb 19 16:39:16 crc kubenswrapper[4810]: I0219 16:39:16.487568 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4"} err="failed to get container status \"5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4\": rpc error: code = NotFound desc = could not find container \"5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4\": container with ID starting with 5b8771d19c21acb137923f1de719182c1c51622cce0781c46ea7cb43d5900fe4 not found: ID does not exist" Feb 19 16:39:17 crc kubenswrapper[4810]: I0219 16:39:17.486926 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f034bea-4449-48b7-b4dc-359682644709" path="/var/lib/kubelet/pods/3f034bea-4449-48b7-b4dc-359682644709/volumes" Feb 19 16:39:19 crc kubenswrapper[4810]: I0219 16:39:19.938517 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:19 crc kubenswrapper[4810]: I0219 16:39:19.939158 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:21 crc 
kubenswrapper[4810]: I0219 16:39:21.154342 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ch7bb" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="registry-server" probeResult="failure" output=< Feb 19 16:39:21 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:39:21 crc kubenswrapper[4810]: > Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.439496 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:26 crc kubenswrapper[4810]: E0219 16:39:26.442438 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="extract-utilities" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.442489 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="extract-utilities" Feb 19 16:39:26 crc kubenswrapper[4810]: E0219 16:39:26.442526 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="extract-content" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.442540 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="extract-content" Feb 19 16:39:26 crc kubenswrapper[4810]: E0219 16:39:26.442580 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="registry-server" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.442593 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="registry-server" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.442975 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f034bea-4449-48b7-b4dc-359682644709" containerName="registry-server" Feb 19 16:39:26 crc 
kubenswrapper[4810]: I0219 16:39:26.445855 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.501839 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.549988 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.550063 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.550164 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqsdk\" (UniqueName: \"kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.652381 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc 
kubenswrapper[4810]: I0219 16:39:26.652451 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.652516 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqsdk\" (UniqueName: \"kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.652985 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.653199 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.673436 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqsdk\" (UniqueName: \"kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk\") pod \"redhat-operators-w474k\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:26 crc kubenswrapper[4810]: I0219 16:39:26.794165 4810 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:27 crc kubenswrapper[4810]: I0219 16:39:27.260074 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:27 crc kubenswrapper[4810]: I0219 16:39:27.475972 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerStarted","Data":"442b7ce65bc1412aed55a2335f985bfa36080044176d83cf3f6041d9b930cf3e"} Feb 19 16:39:28 crc kubenswrapper[4810]: I0219 16:39:28.493046 4810 generic.go:334] "Generic (PLEG): container finished" podID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerID="da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec" exitCode=0 Feb 19 16:39:28 crc kubenswrapper[4810]: I0219 16:39:28.493147 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerDied","Data":"da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec"} Feb 19 16:39:30 crc kubenswrapper[4810]: I0219 16:39:30.023660 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:30 crc kubenswrapper[4810]: I0219 16:39:30.099675 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:30 crc kubenswrapper[4810]: I0219 16:39:30.522124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerStarted","Data":"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c"} Feb 19 16:39:31 crc kubenswrapper[4810]: I0219 16:39:31.806232 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:31 crc kubenswrapper[4810]: I0219 16:39:31.809156 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ch7bb" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="registry-server" containerID="cri-o://7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673" gracePeriod=2 Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.333214 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.423683 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbsd6\" (UniqueName: \"kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6\") pod \"81d6df67-4f89-42b8-9e19-2812ece77996\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.424019 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities\") pod \"81d6df67-4f89-42b8-9e19-2812ece77996\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.424119 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content\") pod \"81d6df67-4f89-42b8-9e19-2812ece77996\" (UID: \"81d6df67-4f89-42b8-9e19-2812ece77996\") " Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.424624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities" (OuterVolumeSpecName: "utilities") pod "81d6df67-4f89-42b8-9e19-2812ece77996" (UID: 
"81d6df67-4f89-42b8-9e19-2812ece77996"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.424986 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.431005 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6" (OuterVolumeSpecName: "kube-api-access-fbsd6") pod "81d6df67-4f89-42b8-9e19-2812ece77996" (UID: "81d6df67-4f89-42b8-9e19-2812ece77996"). InnerVolumeSpecName "kube-api-access-fbsd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.508729 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81d6df67-4f89-42b8-9e19-2812ece77996" (UID: "81d6df67-4f89-42b8-9e19-2812ece77996"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.527192 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d6df67-4f89-42b8-9e19-2812ece77996-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.527217 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbsd6\" (UniqueName: \"kubernetes.io/projected/81d6df67-4f89-42b8-9e19-2812ece77996-kube-api-access-fbsd6\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.555777 4810 generic.go:334] "Generic (PLEG): container finished" podID="81d6df67-4f89-42b8-9e19-2812ece77996" containerID="7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673" exitCode=0 Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.555855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerDied","Data":"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673"} Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.555931 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ch7bb" event={"ID":"81d6df67-4f89-42b8-9e19-2812ece77996","Type":"ContainerDied","Data":"378c52d973ebcb5515fc8fc9204fa2b612ee44117d5bb74a714d7060313b49d8"} Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.555969 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ch7bb" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.555983 4810 scope.go:117] "RemoveContainer" containerID="7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.579229 4810 scope.go:117] "RemoveContainer" containerID="d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.619068 4810 scope.go:117] "RemoveContainer" containerID="e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.627239 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.641580 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ch7bb"] Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.650466 4810 scope.go:117] "RemoveContainer" containerID="7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673" Feb 19 16:39:33 crc kubenswrapper[4810]: E0219 16:39:33.650849 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673\": container with ID starting with 7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673 not found: ID does not exist" containerID="7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.650888 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673"} err="failed to get container status \"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673\": rpc error: code = NotFound desc = could not find 
container \"7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673\": container with ID starting with 7a641592bba10da8ad71aed1fef8b54b79dde8c98967b7fbae699b4f3ead5673 not found: ID does not exist" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.650922 4810 scope.go:117] "RemoveContainer" containerID="d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83" Feb 19 16:39:33 crc kubenswrapper[4810]: E0219 16:39:33.651288 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83\": container with ID starting with d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83 not found: ID does not exist" containerID="d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.651305 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83"} err="failed to get container status \"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83\": rpc error: code = NotFound desc = could not find container \"d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83\": container with ID starting with d903890d8b4001a65581b9af763d7137528b1ef0a5e31781ad32e07dbd687c83 not found: ID does not exist" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.651317 4810 scope.go:117] "RemoveContainer" containerID="e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0" Feb 19 16:39:33 crc kubenswrapper[4810]: E0219 16:39:33.651760 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0\": container with ID starting with e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0 not found: ID does 
not exist" containerID="e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0" Feb 19 16:39:33 crc kubenswrapper[4810]: I0219 16:39:33.651780 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0"} err="failed to get container status \"e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0\": rpc error: code = NotFound desc = could not find container \"e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0\": container with ID starting with e2710dde94bb0d603558ff62e2c9e0cf51744ccb3f0aaf41d2db02d41c8d6da0 not found: ID does not exist" Feb 19 16:39:34 crc kubenswrapper[4810]: I0219 16:39:34.577149 4810 generic.go:334] "Generic (PLEG): container finished" podID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerID="5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c" exitCode=0 Feb 19 16:39:34 crc kubenswrapper[4810]: I0219 16:39:34.577433 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerDied","Data":"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c"} Feb 19 16:39:35 crc kubenswrapper[4810]: I0219 16:39:35.459714 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" path="/var/lib/kubelet/pods/81d6df67-4f89-42b8-9e19-2812ece77996/volumes" Feb 19 16:39:36 crc kubenswrapper[4810]: I0219 16:39:36.605405 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerStarted","Data":"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836"} Feb 19 16:39:36 crc kubenswrapper[4810]: I0219 16:39:36.643752 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-w474k" podStartSLOduration=4.141139902 podStartE2EDuration="10.643724776s" podCreationTimestamp="2026-02-19 16:39:26 +0000 UTC" firstStartedPulling="2026-02-19 16:39:28.495777708 +0000 UTC m=+5397.977807852" lastFinishedPulling="2026-02-19 16:39:34.998362562 +0000 UTC m=+5404.480392726" observedRunningTime="2026-02-19 16:39:36.634797245 +0000 UTC m=+5406.116827369" watchObservedRunningTime="2026-02-19 16:39:36.643724776 +0000 UTC m=+5406.125754930" Feb 19 16:39:36 crc kubenswrapper[4810]: I0219 16:39:36.794578 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:36 crc kubenswrapper[4810]: I0219 16:39:36.794620 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:37 crc kubenswrapper[4810]: I0219 16:39:37.864666 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w474k" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="registry-server" probeResult="failure" output=< Feb 19 16:39:37 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:39:37 crc kubenswrapper[4810]: > Feb 19 16:39:46 crc kubenswrapper[4810]: I0219 16:39:46.874933 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:46 crc kubenswrapper[4810]: I0219 16:39:46.957960 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:47 crc kubenswrapper[4810]: I0219 16:39:47.132996 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:48 crc kubenswrapper[4810]: I0219 16:39:48.754437 4810 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-w474k" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="registry-server" containerID="cri-o://a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836" gracePeriod=2 Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.327828 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.445980 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content\") pod \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.446303 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqsdk\" (UniqueName: \"kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk\") pod \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.447859 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities\") pod \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\" (UID: \"c7403ee1-3e5c-4f08-a3ab-4eca75254405\") " Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.448894 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities" (OuterVolumeSpecName: "utilities") pod "c7403ee1-3e5c-4f08-a3ab-4eca75254405" (UID: "c7403ee1-3e5c-4f08-a3ab-4eca75254405"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.449768 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.455520 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk" (OuterVolumeSpecName: "kube-api-access-bqsdk") pod "c7403ee1-3e5c-4f08-a3ab-4eca75254405" (UID: "c7403ee1-3e5c-4f08-a3ab-4eca75254405"). InnerVolumeSpecName "kube-api-access-bqsdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.552055 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqsdk\" (UniqueName: \"kubernetes.io/projected/c7403ee1-3e5c-4f08-a3ab-4eca75254405-kube-api-access-bqsdk\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.584769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7403ee1-3e5c-4f08-a3ab-4eca75254405" (UID: "c7403ee1-3e5c-4f08-a3ab-4eca75254405"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.653602 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7403ee1-3e5c-4f08-a3ab-4eca75254405-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.766604 4810 generic.go:334] "Generic (PLEG): container finished" podID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerID="a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836" exitCode=0 Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.766651 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerDied","Data":"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836"} Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.766690 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w474k" event={"ID":"c7403ee1-3e5c-4f08-a3ab-4eca75254405","Type":"ContainerDied","Data":"442b7ce65bc1412aed55a2335f985bfa36080044176d83cf3f6041d9b930cf3e"} Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.766711 4810 scope.go:117] "RemoveContainer" containerID="a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.766711 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w474k" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.794203 4810 scope.go:117] "RemoveContainer" containerID="5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.817377 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.828359 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w474k"] Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.833791 4810 scope.go:117] "RemoveContainer" containerID="da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.889028 4810 scope.go:117] "RemoveContainer" containerID="a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836" Feb 19 16:39:49 crc kubenswrapper[4810]: E0219 16:39:49.890646 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836\": container with ID starting with a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836 not found: ID does not exist" containerID="a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.890687 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836"} err="failed to get container status \"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836\": rpc error: code = NotFound desc = could not find container \"a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836\": container with ID starting with a968f276f3edfffb95abada6e7316b27737144bb357510bb73c8efb6a9c6d836 not found: ID does 
not exist" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.890715 4810 scope.go:117] "RemoveContainer" containerID="5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c" Feb 19 16:39:49 crc kubenswrapper[4810]: E0219 16:39:49.891398 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c\": container with ID starting with 5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c not found: ID does not exist" containerID="5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.891430 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c"} err="failed to get container status \"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c\": rpc error: code = NotFound desc = could not find container \"5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c\": container with ID starting with 5d843baf96eed557854d6793bfaf74426aa5aeab7dc682913f3b750090dfd60c not found: ID does not exist" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.891449 4810 scope.go:117] "RemoveContainer" containerID="da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec" Feb 19 16:39:49 crc kubenswrapper[4810]: E0219 16:39:49.891910 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec\": container with ID starting with da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec not found: ID does not exist" containerID="da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec" Feb 19 16:39:49 crc kubenswrapper[4810]: I0219 16:39:49.891943 4810 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec"} err="failed to get container status \"da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec\": rpc error: code = NotFound desc = could not find container \"da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec\": container with ID starting with da5e11ed127d0033e661e20ce3da73f34f371a2138b894304f1b033ea38ce0ec not found: ID does not exist" Feb 19 16:39:51 crc kubenswrapper[4810]: I0219 16:39:51.461236 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" path="/var/lib/kubelet/pods/c7403ee1-3e5c-4f08-a3ab-4eca75254405/volumes" Feb 19 16:41:02 crc kubenswrapper[4810]: I0219 16:41:02.696856 4810 generic.go:334] "Generic (PLEG): container finished" podID="a4c017a9-c049-4baa-acc0-e08a25437c90" containerID="7c1119346fc25e4a8eb25c191fe4eed4ab3589389debf84e048e3e376479d897" exitCode=0 Feb 19 16:41:02 crc kubenswrapper[4810]: I0219 16:41:02.696976 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4c017a9-c049-4baa-acc0-e08a25437c90","Type":"ContainerDied","Data":"7c1119346fc25e4a8eb25c191fe4eed4ab3589389debf84e048e3e376479d897"} Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.088408 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.150911 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.150998 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151029 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151111 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5x5\" (UniqueName: \"kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151217 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151355 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151473 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151500 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.151525 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data\") pod \"a4c017a9-c049-4baa-acc0-e08a25437c90\" (UID: \"a4c017a9-c049-4baa-acc0-e08a25437c90\") " Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.152582 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.152867 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data" (OuterVolumeSpecName: "config-data") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.157993 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.165892 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5" (OuterVolumeSpecName: "kube-api-access-tv5x5") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "kube-api-access-tv5x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.168663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.194342 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.206673 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.220547 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.243846 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a4c017a9-c049-4baa-acc0-e08a25437c90" (UID: "a4c017a9-c049-4baa-acc0-e08a25437c90"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253260 4810 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253291 4810 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253304 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253318 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253349 4810 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253363 4810 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a4c017a9-c049-4baa-acc0-e08a25437c90-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253407 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 16:41:04 crc 
kubenswrapper[4810]: I0219 16:41:04.253422 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv5x5\" (UniqueName: \"kubernetes.io/projected/a4c017a9-c049-4baa-acc0-e08a25437c90-kube-api-access-tv5x5\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.253434 4810 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4c017a9-c049-4baa-acc0-e08a25437c90-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.282789 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.354239 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.726944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a4c017a9-c049-4baa-acc0-e08a25437c90","Type":"ContainerDied","Data":"bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2"} Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.727009 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc53d602c1ff8998f515238b989c9bf62ab660f688576459213d5fed2a632fb2" Feb 19 16:41:04 crc kubenswrapper[4810]: I0219 16:41:04.727011 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.695134 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697073 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="extract-utilities" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697114 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="extract-utilities" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697136 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697146 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697161 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c017a9-c049-4baa-acc0-e08a25437c90" containerName="tempest-tests-tempest-tests-runner" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697169 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c017a9-c049-4baa-acc0-e08a25437c90" containerName="tempest-tests-tempest-tests-runner" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697187 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="extract-utilities" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697195 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="extract-utilities" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697216 4810 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="extract-content" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697223 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="extract-content" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697237 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697245 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: E0219 16:41:16.697281 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="extract-content" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697290 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="extract-content" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697523 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7403ee1-3e5c-4f08-a3ab-4eca75254405" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697539 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c017a9-c049-4baa-acc0-e08a25437c90" containerName="tempest-tests-tempest-tests-runner" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.697554 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d6df67-4f89-42b8-9e19-2812ece77996" containerName="registry-server" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.698402 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.700802 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hvstp" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.725423 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.834264 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwwl\" (UniqueName: \"kubernetes.io/projected/d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c-kube-api-access-qdwwl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.834596 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.937097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwwl\" (UniqueName: \"kubernetes.io/projected/d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c-kube-api-access-qdwwl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.937171 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.937801 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.961034 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwwl\" (UniqueName: \"kubernetes.io/projected/d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c-kube-api-access-qdwwl\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:16 crc kubenswrapper[4810]: I0219 16:41:16.994873 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:17 crc kubenswrapper[4810]: I0219 16:41:17.027183 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 16:41:17 crc kubenswrapper[4810]: I0219 16:41:17.351117 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 16:41:17 crc kubenswrapper[4810]: I0219 16:41:17.876145 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c","Type":"ContainerStarted","Data":"faa4349e74e3218974b49ca21921c71165d9ca53eb17561d45afb4794f92a025"} Feb 19 16:41:18 crc kubenswrapper[4810]: I0219 16:41:18.890259 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c","Type":"ContainerStarted","Data":"7ba0abe443e2df1851b2a666811793ff90dd394a1196d027990cb52af7854375"} Feb 19 16:41:18 crc kubenswrapper[4810]: I0219 16:41:18.911963 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.952307276 podStartE2EDuration="2.911943688s" podCreationTimestamp="2026-02-19 16:41:16 +0000 UTC" firstStartedPulling="2026-02-19 16:41:17.352773059 +0000 UTC m=+5506.834803223" lastFinishedPulling="2026-02-19 16:41:18.312409501 +0000 UTC m=+5507.794439635" observedRunningTime="2026-02-19 16:41:18.907387558 +0000 UTC m=+5508.389417692" watchObservedRunningTime="2026-02-19 16:41:18.911943688 +0000 UTC m=+5508.393973822" Feb 19 16:41:19 crc kubenswrapper[4810]: I0219 16:41:19.537468 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:41:19 crc 
kubenswrapper[4810]: I0219 16:41:19.537531 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.847760 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mt7ln/must-gather-xtkm7"] Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.849829 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.852347 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mt7ln"/"default-dockercfg-6x4d7" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.852689 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mt7ln"/"kube-root-ca.crt" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.855315 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mt7ln"/"openshift-service-ca.crt" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.857934 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mt7ln/must-gather-xtkm7"] Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.962492 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dql\" (UniqueName: \"kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:43 crc kubenswrapper[4810]: I0219 16:41:43.962839 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:44 crc kubenswrapper[4810]: I0219 16:41:44.065293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dql\" (UniqueName: \"kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:44 crc kubenswrapper[4810]: I0219 16:41:44.065724 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:44 crc kubenswrapper[4810]: I0219 16:41:44.066094 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:44 crc kubenswrapper[4810]: I0219 16:41:44.586201 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dql\" (UniqueName: \"kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql\") pod \"must-gather-xtkm7\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:44 crc kubenswrapper[4810]: I0219 16:41:44.771386 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:41:45 crc kubenswrapper[4810]: W0219 16:41:45.271560 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8b3bd1_400c_4da2_a6d6_efa7d8bbd994.slice/crio-151c67b89f6c96a03e2c74c97afaae12904a2f90d765b36355e67b20a49b85e4 WatchSource:0}: Error finding container 151c67b89f6c96a03e2c74c97afaae12904a2f90d765b36355e67b20a49b85e4: Status 404 returned error can't find the container with id 151c67b89f6c96a03e2c74c97afaae12904a2f90d765b36355e67b20a49b85e4 Feb 19 16:41:45 crc kubenswrapper[4810]: I0219 16:41:45.277202 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mt7ln/must-gather-xtkm7"] Feb 19 16:41:46 crc kubenswrapper[4810]: I0219 16:41:46.225951 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" event={"ID":"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994","Type":"ContainerStarted","Data":"151c67b89f6c96a03e2c74c97afaae12904a2f90d765b36355e67b20a49b85e4"} Feb 19 16:41:49 crc kubenswrapper[4810]: I0219 16:41:49.537267 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:41:49 crc kubenswrapper[4810]: I0219 16:41:49.537724 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:41:52 crc kubenswrapper[4810]: I0219 16:41:52.308707 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-mt7ln/must-gather-xtkm7" event={"ID":"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994","Type":"ContainerStarted","Data":"61e6a470548c19ba5e113ede1e2ed5b363e40d072f7a28ac5b76e32667d06ba8"} Feb 19 16:41:52 crc kubenswrapper[4810]: I0219 16:41:52.309591 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" event={"ID":"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994","Type":"ContainerStarted","Data":"17449341bfb55cd42e3f6347fa4879ccfdfca14899e053d20de9d0995b28213d"} Feb 19 16:41:52 crc kubenswrapper[4810]: I0219 16:41:52.350048 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" podStartSLOduration=3.163753086 podStartE2EDuration="9.35002478s" podCreationTimestamp="2026-02-19 16:41:43 +0000 UTC" firstStartedPulling="2026-02-19 16:41:45.273974716 +0000 UTC m=+5534.756004840" lastFinishedPulling="2026-02-19 16:41:51.46024641 +0000 UTC m=+5540.942276534" observedRunningTime="2026-02-19 16:41:52.341909975 +0000 UTC m=+5541.823940109" watchObservedRunningTime="2026-02-19 16:41:52.35002478 +0000 UTC m=+5541.832054904" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.133712 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-5z5rw"] Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.136440 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.298089 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.298357 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzqj\" (UniqueName: \"kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.400840 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzqj\" (UniqueName: \"kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.401384 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.401499 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc 
kubenswrapper[4810]: I0219 16:41:57.421721 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzqj\" (UniqueName: \"kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj\") pod \"crc-debug-5z5rw\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:57 crc kubenswrapper[4810]: I0219 16:41:57.460531 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:41:58 crc kubenswrapper[4810]: I0219 16:41:58.375379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" event={"ID":"80cd5f56-cd22-4f44-a387-03ac5132a14d","Type":"ContainerStarted","Data":"a39231f8e7be04f14e3096847bc871d04550c4e47e8f07c263cb25ad498189d6"} Feb 19 16:42:08 crc kubenswrapper[4810]: I0219 16:42:08.465664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" event={"ID":"80cd5f56-cd22-4f44-a387-03ac5132a14d","Type":"ContainerStarted","Data":"a07c763af809aeaa0f606507ff13030276f05d8d690e2a7ca0fe044711b225e6"} Feb 19 16:42:08 crc kubenswrapper[4810]: I0219 16:42:08.483958 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" podStartSLOduration=1.6297250920000002 podStartE2EDuration="11.483941087s" podCreationTimestamp="2026-02-19 16:41:57 +0000 UTC" firstStartedPulling="2026-02-19 16:41:57.513288956 +0000 UTC m=+5546.995319080" lastFinishedPulling="2026-02-19 16:42:07.367504941 +0000 UTC m=+5556.849535075" observedRunningTime="2026-02-19 16:42:08.480492424 +0000 UTC m=+5557.962522558" watchObservedRunningTime="2026-02-19 16:42:08.483941087 +0000 UTC m=+5557.965971221" Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.900072 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.902460 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.918651 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.998969 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.999050 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4k24\" (UniqueName: \"kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:09 crc kubenswrapper[4810]: I0219 16:42:09.999120 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.100999 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content\") pod \"redhat-marketplace-9lqkv\" (UID: 
\"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.101605 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.101814 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4k24\" (UniqueName: \"kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.102220 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.102601 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.123978 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4k24\" (UniqueName: \"kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24\") pod \"redhat-marketplace-9lqkv\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " 
pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.218881 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:10 crc kubenswrapper[4810]: I0219 16:42:10.714745 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:11 crc kubenswrapper[4810]: I0219 16:42:11.521084 4810 generic.go:334] "Generic (PLEG): container finished" podID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerID="7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9" exitCode=0 Feb 19 16:42:11 crc kubenswrapper[4810]: I0219 16:42:11.521338 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerDied","Data":"7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9"} Feb 19 16:42:11 crc kubenswrapper[4810]: I0219 16:42:11.521369 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerStarted","Data":"f3e83be118c02d536c510ef889a7296ac60084c7ad5f21a05fb5606bf16edf72"} Feb 19 16:42:14 crc kubenswrapper[4810]: I0219 16:42:14.556700 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerStarted","Data":"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef"} Feb 19 16:42:15 crc kubenswrapper[4810]: I0219 16:42:15.567803 4810 generic.go:334] "Generic (PLEG): container finished" podID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerID="a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef" exitCode=0 Feb 19 16:42:15 crc kubenswrapper[4810]: I0219 16:42:15.567864 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerDied","Data":"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef"} Feb 19 16:42:16 crc kubenswrapper[4810]: I0219 16:42:16.583161 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerStarted","Data":"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620"} Feb 19 16:42:16 crc kubenswrapper[4810]: I0219 16:42:16.610016 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9lqkv" podStartSLOduration=4.772299823 podStartE2EDuration="7.609995897s" podCreationTimestamp="2026-02-19 16:42:09 +0000 UTC" firstStartedPulling="2026-02-19 16:42:13.152859477 +0000 UTC m=+5562.634889601" lastFinishedPulling="2026-02-19 16:42:15.990555551 +0000 UTC m=+5565.472585675" observedRunningTime="2026-02-19 16:42:16.604427983 +0000 UTC m=+5566.086458117" watchObservedRunningTime="2026-02-19 16:42:16.609995897 +0000 UTC m=+5566.092026031" Feb 19 16:42:19 crc kubenswrapper[4810]: I0219 16:42:19.537223 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:42:19 crc kubenswrapper[4810]: I0219 16:42:19.538017 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:42:19 crc kubenswrapper[4810]: I0219 16:42:19.538086 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:42:19 crc kubenswrapper[4810]: I0219 16:42:19.539195 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:42:19 crc kubenswrapper[4810]: I0219 16:42:19.539283 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625" gracePeriod=600 Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.219396 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.219729 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.270000 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.621692 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625" exitCode=0 Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.621736 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" 
event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625"} Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.622014 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e"} Feb 19 16:42:20 crc kubenswrapper[4810]: I0219 16:42:20.622055 4810 scope.go:117] "RemoveContainer" containerID="c54befe76a9fc3a2a84e7b2c94c4037774045172c204f2e9cb88741a51f735a3" Feb 19 16:42:30 crc kubenswrapper[4810]: I0219 16:42:30.277993 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:30 crc kubenswrapper[4810]: I0219 16:42:30.356163 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:30 crc kubenswrapper[4810]: I0219 16:42:30.747651 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9lqkv" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="registry-server" containerID="cri-o://9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620" gracePeriod=2 Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.313966 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.448399 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content\") pod \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.448490 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities\") pod \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.448647 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4k24\" (UniqueName: \"kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24\") pod \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\" (UID: \"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb\") " Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.449544 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities" (OuterVolumeSpecName: "utilities") pod "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" (UID: "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.462140 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24" (OuterVolumeSpecName: "kube-api-access-n4k24") pod "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" (UID: "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb"). InnerVolumeSpecName "kube-api-access-n4k24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.478129 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" (UID: "db10b4b7-0ea5-46ad-ad3f-8618c5c820bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.551692 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.551727 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.551738 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4k24\" (UniqueName: \"kubernetes.io/projected/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb-kube-api-access-n4k24\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.757666 4810 generic.go:334] "Generic (PLEG): container finished" podID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerID="9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620" exitCode=0 Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.757707 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerDied","Data":"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620"} Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.757747 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9lqkv" event={"ID":"db10b4b7-0ea5-46ad-ad3f-8618c5c820bb","Type":"ContainerDied","Data":"f3e83be118c02d536c510ef889a7296ac60084c7ad5f21a05fb5606bf16edf72"} Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.757765 4810 scope.go:117] "RemoveContainer" containerID="9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.757723 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lqkv" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.779883 4810 scope.go:117] "RemoveContainer" containerID="a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.799084 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.810150 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lqkv"] Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.810496 4810 scope.go:117] "RemoveContainer" containerID="7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.861877 4810 scope.go:117] "RemoveContainer" containerID="9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620" Feb 19 16:42:31 crc kubenswrapper[4810]: E0219 16:42:31.862702 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620\": container with ID starting with 9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620 not found: ID does not exist" containerID="9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.862744 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620"} err="failed to get container status \"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620\": rpc error: code = NotFound desc = could not find container \"9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620\": container with ID starting with 9a5cc97dac182042a875d1ceb6bf4e9c3ca19f53752e02d2e0116925aa456620 not found: ID does not exist" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.862771 4810 scope.go:117] "RemoveContainer" containerID="a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef" Feb 19 16:42:31 crc kubenswrapper[4810]: E0219 16:42:31.863107 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef\": container with ID starting with a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef not found: ID does not exist" containerID="a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.863147 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef"} err="failed to get container status \"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef\": rpc error: code = NotFound desc = could not find container \"a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef\": container with ID starting with a4fcdca7c10e1120d38e6f09a02741be0d99fe0d553a926c6932f130b77917ef not found: ID does not exist" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.863178 4810 scope.go:117] "RemoveContainer" containerID="7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9" Feb 19 16:42:31 crc kubenswrapper[4810]: E0219 
16:42:31.863718 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9\": container with ID starting with 7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9 not found: ID does not exist" containerID="7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9" Feb 19 16:42:31 crc kubenswrapper[4810]: I0219 16:42:31.863754 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9"} err="failed to get container status \"7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9\": rpc error: code = NotFound desc = could not find container \"7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9\": container with ID starting with 7f5494bd61ffab00fb7a4776204e698af55418ab9763f28dfbefa06a73f560d9 not found: ID does not exist" Feb 19 16:42:33 crc kubenswrapper[4810]: I0219 16:42:33.456485 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" path="/var/lib/kubelet/pods/db10b4b7-0ea5-46ad-ad3f-8618c5c820bb/volumes" Feb 19 16:42:51 crc kubenswrapper[4810]: I0219 16:42:51.970600 4810 generic.go:334] "Generic (PLEG): container finished" podID="80cd5f56-cd22-4f44-a387-03ac5132a14d" containerID="a07c763af809aeaa0f606507ff13030276f05d8d690e2a7ca0fe044711b225e6" exitCode=0 Feb 19 16:42:51 crc kubenswrapper[4810]: I0219 16:42:51.970698 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" event={"ID":"80cd5f56-cd22-4f44-a387-03ac5132a14d","Type":"ContainerDied","Data":"a07c763af809aeaa0f606507ff13030276f05d8d690e2a7ca0fe044711b225e6"} Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.099257 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.138187 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-5z5rw"] Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.147191 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-5z5rw"] Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.228048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host\") pod \"80cd5f56-cd22-4f44-a387-03ac5132a14d\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.228108 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzqj\" (UniqueName: \"kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj\") pod \"80cd5f56-cd22-4f44-a387-03ac5132a14d\" (UID: \"80cd5f56-cd22-4f44-a387-03ac5132a14d\") " Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.228213 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host" (OuterVolumeSpecName: "host") pod "80cd5f56-cd22-4f44-a387-03ac5132a14d" (UID: "80cd5f56-cd22-4f44-a387-03ac5132a14d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.228946 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80cd5f56-cd22-4f44-a387-03ac5132a14d-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.239527 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj" (OuterVolumeSpecName: "kube-api-access-xwzqj") pod "80cd5f56-cd22-4f44-a387-03ac5132a14d" (UID: "80cd5f56-cd22-4f44-a387-03ac5132a14d"). InnerVolumeSpecName "kube-api-access-xwzqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.330987 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwzqj\" (UniqueName: \"kubernetes.io/projected/80cd5f56-cd22-4f44-a387-03ac5132a14d-kube-api-access-xwzqj\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.454523 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cd5f56-cd22-4f44-a387-03ac5132a14d" path="/var/lib/kubelet/pods/80cd5f56-cd22-4f44-a387-03ac5132a14d/volumes" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.992826 4810 scope.go:117] "RemoveContainer" containerID="a07c763af809aeaa0f606507ff13030276f05d8d690e2a7ca0fe044711b225e6" Feb 19 16:42:53 crc kubenswrapper[4810]: I0219 16:42:53.992879 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-5z5rw" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.370023 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-q6p74"] Feb 19 16:42:54 crc kubenswrapper[4810]: E0219 16:42:54.370925 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="registry-server" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.370948 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="registry-server" Feb 19 16:42:54 crc kubenswrapper[4810]: E0219 16:42:54.370980 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="extract-utilities" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.370993 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="extract-utilities" Feb 19 16:42:54 crc kubenswrapper[4810]: E0219 16:42:54.371013 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cd5f56-cd22-4f44-a387-03ac5132a14d" containerName="container-00" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.371063 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cd5f56-cd22-4f44-a387-03ac5132a14d" containerName="container-00" Feb 19 16:42:54 crc kubenswrapper[4810]: E0219 16:42:54.371088 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="extract-content" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.371097 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="extract-content" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.371427 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cd5f56-cd22-4f44-a387-03ac5132a14d" 
containerName="container-00" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.371460 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="db10b4b7-0ea5-46ad-ad3f-8618c5c820bb" containerName="registry-server" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.372493 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.560866 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z28r\" (UniqueName: \"kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.561632 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.663444 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.663659 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z28r\" (UniqueName: \"kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc 
kubenswrapper[4810]: I0219 16:42:54.664113 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.682856 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z28r\" (UniqueName: \"kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r\") pod \"crc-debug-q6p74\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:54 crc kubenswrapper[4810]: I0219 16:42:54.692672 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:55 crc kubenswrapper[4810]: I0219 16:42:55.007792 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" event={"ID":"0f51ddeb-2d6c-440a-9629-fd9894243a23","Type":"ContainerStarted","Data":"8eab70b763bfc660b31c0a60ab6d66eddd4b83e4bf6e5028c9b6b68f68641100"} Feb 19 16:42:55 crc kubenswrapper[4810]: I0219 16:42:55.007857 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" event={"ID":"0f51ddeb-2d6c-440a-9629-fd9894243a23","Type":"ContainerStarted","Data":"4496e95ad24f39e6c752124e7013fd290a20930dfd3410e4845de35db32a4353"} Feb 19 16:42:55 crc kubenswrapper[4810]: I0219 16:42:55.024767 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" podStartSLOduration=1.024754975 podStartE2EDuration="1.024754975s" podCreationTimestamp="2026-02-19 16:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
16:42:55.017688105 +0000 UTC m=+5604.499718229" watchObservedRunningTime="2026-02-19 16:42:55.024754975 +0000 UTC m=+5604.506785099" Feb 19 16:42:56 crc kubenswrapper[4810]: I0219 16:42:56.017004 4810 generic.go:334] "Generic (PLEG): container finished" podID="0f51ddeb-2d6c-440a-9629-fd9894243a23" containerID="8eab70b763bfc660b31c0a60ab6d66eddd4b83e4bf6e5028c9b6b68f68641100" exitCode=0 Feb 19 16:42:56 crc kubenswrapper[4810]: I0219 16:42:56.017182 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" event={"ID":"0f51ddeb-2d6c-440a-9629-fd9894243a23","Type":"ContainerDied","Data":"8eab70b763bfc660b31c0a60ab6d66eddd4b83e4bf6e5028c9b6b68f68641100"} Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.623281 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.657158 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-q6p74"] Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.667850 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-q6p74"] Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.721723 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host\") pod \"0f51ddeb-2d6c-440a-9629-fd9894243a23\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.721861 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host" (OuterVolumeSpecName: "host") pod "0f51ddeb-2d6c-440a-9629-fd9894243a23" (UID: "0f51ddeb-2d6c-440a-9629-fd9894243a23"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.721999 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z28r\" (UniqueName: \"kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r\") pod \"0f51ddeb-2d6c-440a-9629-fd9894243a23\" (UID: \"0f51ddeb-2d6c-440a-9629-fd9894243a23\") " Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.722403 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f51ddeb-2d6c-440a-9629-fd9894243a23-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.728529 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r" (OuterVolumeSpecName: "kube-api-access-6z28r") pod "0f51ddeb-2d6c-440a-9629-fd9894243a23" (UID: "0f51ddeb-2d6c-440a-9629-fd9894243a23"). InnerVolumeSpecName "kube-api-access-6z28r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:42:57 crc kubenswrapper[4810]: I0219 16:42:57.823735 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z28r\" (UniqueName: \"kubernetes.io/projected/0f51ddeb-2d6c-440a-9629-fd9894243a23-kube-api-access-6z28r\") on node \"crc\" DevicePath \"\"" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.034225 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4496e95ad24f39e6c752124e7013fd290a20930dfd3410e4845de35db32a4353" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.034277 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-q6p74" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.827489 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-kd2tv"] Feb 19 16:42:58 crc kubenswrapper[4810]: E0219 16:42:58.828104 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f51ddeb-2d6c-440a-9629-fd9894243a23" containerName="container-00" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.828115 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f51ddeb-2d6c-440a-9629-fd9894243a23" containerName="container-00" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.828328 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f51ddeb-2d6c-440a-9629-fd9894243a23" containerName="container-00" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.829006 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.945230 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89d6f\" (UniqueName: \"kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:58 crc kubenswrapper[4810]: I0219 16:42:58.945296 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.047924 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89d6f\" (UniqueName: 
\"kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.048055 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.048382 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.462206 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f51ddeb-2d6c-440a-9629-fd9894243a23" path="/var/lib/kubelet/pods/0f51ddeb-2d6c-440a-9629-fd9894243a23/volumes" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.485492 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89d6f\" (UniqueName: \"kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f\") pod \"crc-debug-kd2tv\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:42:59 crc kubenswrapper[4810]: I0219 16:42:59.754075 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:43:00 crc kubenswrapper[4810]: I0219 16:43:00.055227 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" event={"ID":"d35bc1c3-d664-4faa-89db-dbddd7c714f3","Type":"ContainerStarted","Data":"50ef314a010512b59b5e136c75c52a3caf286d85d31254cd01cdef5d379a63e2"} Feb 19 16:43:01 crc kubenswrapper[4810]: I0219 16:43:01.099057 4810 generic.go:334] "Generic (PLEG): container finished" podID="d35bc1c3-d664-4faa-89db-dbddd7c714f3" containerID="8ccf7e7a2c8e880d2be04530d96a194520a2d060cbaad4b63481170b69382a6c" exitCode=0 Feb 19 16:43:01 crc kubenswrapper[4810]: I0219 16:43:01.099122 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" event={"ID":"d35bc1c3-d664-4faa-89db-dbddd7c714f3","Type":"ContainerDied","Data":"8ccf7e7a2c8e880d2be04530d96a194520a2d060cbaad4b63481170b69382a6c"} Feb 19 16:43:01 crc kubenswrapper[4810]: I0219 16:43:01.151157 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-kd2tv"] Feb 19 16:43:01 crc kubenswrapper[4810]: I0219 16:43:01.165439 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mt7ln/crc-debug-kd2tv"] Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.221958 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.321434 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89d6f\" (UniqueName: \"kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f\") pod \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.321609 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host\") pod \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\" (UID: \"d35bc1c3-d664-4faa-89db-dbddd7c714f3\") " Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.321672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host" (OuterVolumeSpecName: "host") pod "d35bc1c3-d664-4faa-89db-dbddd7c714f3" (UID: "d35bc1c3-d664-4faa-89db-dbddd7c714f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.322190 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d35bc1c3-d664-4faa-89db-dbddd7c714f3-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.340003 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f" (OuterVolumeSpecName: "kube-api-access-89d6f") pod "d35bc1c3-d664-4faa-89db-dbddd7c714f3" (UID: "d35bc1c3-d664-4faa-89db-dbddd7c714f3"). InnerVolumeSpecName "kube-api-access-89d6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:43:02 crc kubenswrapper[4810]: I0219 16:43:02.423656 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89d6f\" (UniqueName: \"kubernetes.io/projected/d35bc1c3-d664-4faa-89db-dbddd7c714f3-kube-api-access-89d6f\") on node \"crc\" DevicePath \"\"" Feb 19 16:43:03 crc kubenswrapper[4810]: I0219 16:43:03.122565 4810 scope.go:117] "RemoveContainer" containerID="8ccf7e7a2c8e880d2be04530d96a194520a2d060cbaad4b63481170b69382a6c" Feb 19 16:43:03 crc kubenswrapper[4810]: I0219 16:43:03.122881 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/crc-debug-kd2tv" Feb 19 16:43:03 crc kubenswrapper[4810]: I0219 16:43:03.457318 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35bc1c3-d664-4faa-89db-dbddd7c714f3" path="/var/lib/kubelet/pods/d35bc1c3-d664-4faa-89db-dbddd7c714f3/volumes" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.165600 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b886df68b-htd57_8d391303-b5ee-4f63-8035-12f123f35e65/barbican-api/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.348357 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b886df68b-htd57_8d391303-b5ee-4f63-8035-12f123f35e65/barbican-api-log/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.406299 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75f99f68b4-d7hj4_c008ffcd-bb96-47dd-a311-fdc58f6d8918/barbican-keystone-listener/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.485388 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75f99f68b4-d7hj4_c008ffcd-bb96-47dd-a311-fdc58f6d8918/barbican-keystone-listener-log/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.616611 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f8775989-n9rgr_f277c31b-ff97-4f3b-aec3-c5cfe9293d60/barbican-worker/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.626209 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f8775989-n9rgr_f277c31b-ff97-4f3b-aec3-c5cfe9293d60/barbican-worker-log/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.789066 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx_c4a9ca21-e1c7-490d-8078-14407b530301/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.890027 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/ceilometer-central-agent/0.log" Feb 19 16:43:44 crc kubenswrapper[4810]: I0219 16:43:44.971969 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/ceilometer-notification-agent/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.016286 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/proxy-httpd/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.074909 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/sg-core/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.294143 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1723b820-73ac-49f3-8716-283bf2c05925/cinder-api-log/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.543932 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f66b86b2-b164-4380-8a89-bb0cf5f833ef/probe/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 
16:43:45.645580 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f66b86b2-b164-4380-8a89-bb0cf5f833ef/cinder-backup/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.696243 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1723b820-73ac-49f3-8716-283bf2c05925/cinder-api/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.797193 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_48d5e3e9-853c-4988-8746-a6f74e1fe209/cinder-scheduler/0.log" Feb 19 16:43:45 crc kubenswrapper[4810]: I0219 16:43:45.806439 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_48d5e3e9-853c-4988-8746-a6f74e1fe209/probe/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.032529 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_20a46eb8-508d-45be-bf13-31aed23d1582/cinder-volume/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.292621 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_74a12495-8d82-4296-9328-430af6d923b2/probe/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.297924 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_20a46eb8-508d-45be-bf13-31aed23d1582/probe/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.504491 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_74a12495-8d82-4296-9328-430af6d923b2/cinder-volume/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.554562 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6q498_2cff3a3e-0543-4fec-8f5b-5421be276386/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.722562 
4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cm25d_7e1f4472-242a-40a0-a574-9c3119fdb705/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.756171 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/init/0.log" Feb 19 16:43:46 crc kubenswrapper[4810]: I0219 16:43:46.945728 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/init/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.001353 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-flmfl_e6255c5c-26d4-421f-9156-1bdd2f5adcc6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.139313 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/dnsmasq-dns/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.211830 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_41a4af93-6f80-4097-a964-2e3f3055fd3b/glance-httpd/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.242838 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_41a4af93-6f80-4097-a964-2e3f3055fd3b/glance-log/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.416381 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4/glance-log/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.436184 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4/glance-httpd/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.757877 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f56498b8d-9gwmf_737d6629-747f-4d16-a545-d0070c20fe5d/horizon/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.758140 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2_31bd8fe5-f0b6-4463-a545-bdeb0c33b182/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:47 crc kubenswrapper[4810]: I0219 16:43:47.979490 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s8kk5_12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:48 crc kubenswrapper[4810]: I0219 16:43:48.189621 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29525281-26qqv_8984eff3-6c82-4e2f-8bd6-1e820a450874/keystone-cron/0.log" Feb 19 16:43:48 crc kubenswrapper[4810]: I0219 16:43:48.370293 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f56498b8d-9gwmf_737d6629-747f-4d16-a545-d0070c20fe5d/horizon-log/0.log" Feb 19 16:43:48 crc kubenswrapper[4810]: I0219 16:43:48.461191 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9358dbee-2e5b-432d-98e0-6945d2e0d44b/kube-state-metrics/0.log" Feb 19 16:43:48 crc kubenswrapper[4810]: I0219 16:43:48.495473 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cd8bf58f4-ktsjk_95165d88-ea72-4785-8c1a-eea4d54466fb/keystone-api/0.log" Feb 19 16:43:48 crc kubenswrapper[4810]: I0219 16:43:48.631150 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mxd44_b0d687e9-21b0-4abe-b7ec-4fb050926f6c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.111053 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs_6650a3db-fdc1-4342-b8a8-cb91376e75c5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.215997 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dfcf65577-bd5w2_6528bdfd-3389-4776-826e-164fc5117682/neutron-api/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.281760 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dfcf65577-bd5w2_6528bdfd-3389-4776-826e-164fc5117682/neutron-httpd/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.361267 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/setup-container/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.596293 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/rabbitmq/0.log" Feb 19 16:43:49 crc kubenswrapper[4810]: I0219 16:43:49.610751 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/setup-container/0.log" Feb 19 16:43:50 crc kubenswrapper[4810]: I0219 16:43:50.119349 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_65e6588c-3b7f-4719-beb6-90229629820f/nova-cell0-conductor-conductor/0.log" Feb 19 16:43:50 crc kubenswrapper[4810]: I0219 16:43:50.466979 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_f93aa728-7924-4a75-ad48-cc174764cf3e/nova-cell1-conductor-conductor/0.log" Feb 19 16:43:50 crc kubenswrapper[4810]: I0219 16:43:50.723180 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5a7915d4-6c3f-4bc7-b21d-7d51b675640f/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 16:43:51 crc kubenswrapper[4810]: I0219 16:43:51.045736 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nv8wh_cc5014f8-e5aa-47ad-8787-c187b0f7f0e1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:51 crc kubenswrapper[4810]: I0219 16:43:51.232173 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6397af05-d030-46c2-8a0f-a90beb9b2502/nova-api-log/0.log" Feb 19 16:43:51 crc kubenswrapper[4810]: I0219 16:43:51.295832 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f36ad344-e946-4221-892d-3ffe8fbdd59b/nova-metadata-log/0.log" Feb 19 16:43:51 crc kubenswrapper[4810]: I0219 16:43:51.559643 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6397af05-d030-46c2-8a0f-a90beb9b2502/nova-api-api/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.250230 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/mysql-bootstrap/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.269364 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4/nova-scheduler-scheduler/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.461534 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/mysql-bootstrap/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.485698 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/galera/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.642418 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/mysql-bootstrap/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.854205 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/mysql-bootstrap/0.log" Feb 19 16:43:52 crc kubenswrapper[4810]: I0219 16:43:52.873287 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/galera/0.log" Feb 19 16:43:53 crc kubenswrapper[4810]: I0219 16:43:53.678597 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f36ad344-e946-4221-892d-3ffe8fbdd59b/nova-metadata-metadata/0.log" Feb 19 16:43:53 crc kubenswrapper[4810]: I0219 16:43:53.795911 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ca8eb29b-bb26-446f-8a22-5da13ff9d5fa/openstackclient/0.log" Feb 19 16:43:53 crc kubenswrapper[4810]: I0219 16:43:53.864414 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tbt28_c03aad2b-8ca1-4310-8c11-3287fafcd66f/openstack-network-exporter/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.061829 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server-init/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.197281 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server-init/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.212656 4810 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.387857 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s5488_4a4fa57b-aa00-4866-b31e-df29f7f86480/ovn-controller/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.484762 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovs-vswitchd/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.529591 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5gjxx_4defb710-c07f-4e63-9baf-45f51085abdc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.651116 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22facf67-088b-410b-986a-c9e09b3d8feb/openstack-network-exporter/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.771536 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22facf67-088b-410b-986a-c9e09b3d8feb/ovn-northd/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.876372 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bdffb5e6-13bb-4c08-ad3c-52d8ded85431/openstack-network-exporter/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.915030 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bdffb5e6-13bb-4c08-ad3c-52d8ded85431/ovsdbserver-nb/0.log" Feb 19 16:43:54 crc kubenswrapper[4810]: I0219 16:43:54.994872 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b985124-01b7-430c-b5ea-b9fd095e5f5e/openstack-network-exporter/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.087536 4810 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b985124-01b7-430c-b5ea-b9fd095e5f5e/ovsdbserver-sb/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.367431 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7555d68ddd-xqj8c_565eac29-daec-4b40-bcb7-751696560c3a/placement-api/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.404938 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/init-config-reloader/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.556210 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/config-reloader/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.558354 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7555d68ddd-xqj8c_565eac29-daec-4b40-bcb7-751696560c3a/placement-log/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.609984 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/init-config-reloader/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.647169 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/prometheus/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.732395 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/thanos-sidecar/0.log" Feb 19 16:43:55 crc kubenswrapper[4810]: I0219 16:43:55.854760 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/setup-container/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.019895 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/setup-container/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.039510 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/rabbitmq/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.081437 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/setup-container/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.258761 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/setup-container/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.320720 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ps669_69d67433-38d6-4368-a621-254a97b0c619/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.340722 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/rabbitmq/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.505594 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-x748x_32dc9563-791b-421e-a807-41cc1e775b3a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.676088 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-484qb_8c05e8c7-82f6-4ef1-a576-3c84e70dc570/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.707851 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rf56v_e77512a1-b460-4008-9e59-5b38f3e9f925/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:56 crc kubenswrapper[4810]: I0219 16:43:56.875856 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gw579_e3132ed5-687d-4cd1-a539-35c4766a27c1/ssh-known-hosts-edpm-deployment/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.135788 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78bc5d479f-k79xx_9190a865-226b-487c-b0f9-2573f50f0eab/proxy-server/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.229552 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78bc5d479f-k79xx_9190a865-226b-487c-b0f9-2573f50f0eab/proxy-httpd/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.245288 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hrdll_6c36f3e5-f790-4eda-9486-174f8624dad1/swift-ring-rebalance/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.482506 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-auditor/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.483118 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-reaper/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.501962 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-replicator/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.657954 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-server/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 
16:43:57.671673 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-server/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.724810 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-auditor/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.725938 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-replicator/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.839712 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-updater/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.890934 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-auditor/0.log" Feb 19 16:43:57 crc kubenswrapper[4810]: I0219 16:43:57.908426 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-expirer/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.011349 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-server/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.021842 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-replicator/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.114670 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/rsync/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.119135 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-updater/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.216922 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/swift-recon-cron/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.328820 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc_f7ca8c9a-db61-400f-9319-21590462f929/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.429705 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a4c017a9-c049-4baa-acc0-e08a25437c90/tempest-tests-tempest-tests-runner/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.490663 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c/test-operator-logs-container/0.log" Feb 19 16:43:58 crc kubenswrapper[4810]: I0219 16:43:58.659313 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7_412dc62a-d25e-4820-947b-582e310ddff1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:43:59 crc kubenswrapper[4810]: I0219 16:43:59.411692 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_79f3ef20-3f3d-4fa2-8888-36d421303dfd/watcher-applier/0.log" Feb 19 16:44:00 crc kubenswrapper[4810]: I0219 16:44:00.055261 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c11a7f60-4839-44aa-8615-98de657221f4/watcher-api-log/0.log" Feb 19 16:44:02 crc kubenswrapper[4810]: I0219 16:44:02.692664 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d/watcher-decision-engine/0.log" Feb 19 16:44:03 crc kubenswrapper[4810]: I0219 16:44:03.576850 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c11a7f60-4839-44aa-8615-98de657221f4/watcher-api/0.log" Feb 19 16:44:03 crc kubenswrapper[4810]: I0219 16:44:03.668653 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb773d46-7b9f-4ca4-b33c-9b800b9eafd7/memcached/0.log" Feb 19 16:44:19 crc kubenswrapper[4810]: I0219 16:44:19.537229 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:44:19 crc kubenswrapper[4810]: I0219 16:44:19.537869 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.315291 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.500679 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.538302 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.548557 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.912397 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:44:30 crc kubenswrapper[4810]: I0219 16:44:30.930180 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/extract/0.log" Feb 19 16:44:31 crc kubenswrapper[4810]: I0219 16:44:31.146216 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" Feb 19 16:44:31 crc kubenswrapper[4810]: I0219 16:44:31.494368 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-z5fb9_52bb990c-eff0-4673-be27-d55d433bef0d/manager/0.log" Feb 19 16:44:31 crc kubenswrapper[4810]: I0219 16:44:31.943074 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qz68t_2106e7b5-bb83-464a-a43f-943f22b55078/manager/0.log" Feb 19 16:44:32 crc kubenswrapper[4810]: I0219 16:44:32.155079 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-ffm66_e2942952-ce19-4053-91da-05623c954167/manager/0.log" Feb 19 16:44:32 crc kubenswrapper[4810]: I0219 
16:44:32.361606 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-gnmlp_f0ab3643-d267-4902-af1f-cbcbdd7e5e41/manager/0.log" Feb 19 16:44:33 crc kubenswrapper[4810]: I0219 16:44:33.120404 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-2kkhl_4898d4eb-d474-44bc-9a38-e36f300d132f/manager/0.log" Feb 19 16:44:33 crc kubenswrapper[4810]: I0219 16:44:33.249257 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-nnps5_602535d1-0abe-471e-8409-31319af7bd4b/manager/0.log" Feb 19 16:44:33 crc kubenswrapper[4810]: I0219 16:44:33.537035 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-mkfsc_2126b31b-0444-43e4-a250-837f37d476aa/manager/0.log" Feb 19 16:44:33 crc kubenswrapper[4810]: I0219 16:44:33.772912 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-vc7cw_942f40af-0498-4865-99da-bdcd068ef449/manager/0.log" Feb 19 16:44:33 crc kubenswrapper[4810]: I0219 16:44:33.789650 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jxmt5_1217b757-0f1c-4c4e-9abe-55875992915d/manager/0.log" Feb 19 16:44:34 crc kubenswrapper[4810]: I0219 16:44:34.040047 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-6fqmd_e4a54646-39cf-4e42-9367-487ea4f7d8a4/manager/0.log" Feb 19 16:44:34 crc kubenswrapper[4810]: I0219 16:44:34.090830 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-jjqv2_4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65/manager/0.log" Feb 19 16:44:34 crc 
kubenswrapper[4810]: I0219 16:44:34.395668 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l67cq_fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e/manager/0.log" Feb 19 16:44:34 crc kubenswrapper[4810]: I0219 16:44:34.493875 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt_c677bdd0-7248-4b02-9ab4-035c034a976a/manager/0.log" Feb 19 16:44:34 crc kubenswrapper[4810]: I0219 16:44:34.892755 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69cffcd4f6-27gzn_e84ef702-2f13-42e9-ae2b-6f1465b67ff3/operator/0.log" Feb 19 16:44:35 crc kubenswrapper[4810]: I0219 16:44:35.601951 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gkft8_09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95/registry-server/0.log" Feb 19 16:44:35 crc kubenswrapper[4810]: I0219 16:44:35.885228 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-5xnwd_e163eac0-ea1f-4002-9469-844240d7a44c/manager/0.log" Feb 19 16:44:36 crc kubenswrapper[4810]: I0219 16:44:36.124159 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-7tzvr_9f5779a5-4cda-40dc-831d-950f97eae317/manager/0.log" Feb 19 16:44:36 crc kubenswrapper[4810]: I0219 16:44:36.290686 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k98c8_64ed590e-59b6-44c8-baee-324162d099b8/operator/0.log" Feb 19 16:44:36 crc kubenswrapper[4810]: I0219 16:44:36.477454 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-t44nb_aa5063d7-2358-4149-a3b9-ef2ce138faf4/manager/0.log" Feb 19 16:44:36 
crc kubenswrapper[4810]: I0219 16:44:36.938579 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pw9kt_69b7e96d-bce6-4653-998e-3bf5d159ae5a/manager/0.log" Feb 19 16:44:36 crc kubenswrapper[4810]: I0219 16:44:36.971307 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-px9zx_eaed166e-39b5-45ca-8a65-a22710d5fe37/manager/0.log" Feb 19 16:44:37 crc kubenswrapper[4810]: I0219 16:44:37.266188 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-vcbwg_3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d/manager/0.log" Feb 19 16:44:37 crc kubenswrapper[4810]: I0219 16:44:37.389431 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-798847869b-dlmvg_9c5af548-c722-4e6b-9309-1420838257e0/manager/0.log" Feb 19 16:44:37 crc kubenswrapper[4810]: I0219 16:44:37.561418 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d464797d7-lrlqc_a6f83f3c-26f4-472f-9fcd-ae8049f1819a/manager/0.log" Feb 19 16:44:42 crc kubenswrapper[4810]: I0219 16:44:42.930704 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-mzslt_91002269-9fe0-44d2-9dbd-9e4cf58274bf/manager/0.log" Feb 19 16:44:49 crc kubenswrapper[4810]: I0219 16:44:49.538129 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:44:49 crc kubenswrapper[4810]: I0219 16:44:49.538868 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.145268 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt"] Feb 19 16:45:00 crc kubenswrapper[4810]: E0219 16:45:00.146172 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35bc1c3-d664-4faa-89db-dbddd7c714f3" containerName="container-00" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.146184 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35bc1c3-d664-4faa-89db-dbddd7c714f3" containerName="container-00" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.146409 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35bc1c3-d664-4faa-89db-dbddd7c714f3" containerName="container-00" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.147155 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.148927 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.149591 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.176614 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt"] Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.251865 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.251955 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.252186 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gj6b\" (UniqueName: \"kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.353499 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.353673 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gj6b\" (UniqueName: \"kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.353722 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.354649 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.372983 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.374667 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gj6b\" (UniqueName: \"kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b\") pod \"collect-profiles-29525325-x9zxt\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.464830 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.924036 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt"] Feb 19 16:45:00 crc kubenswrapper[4810]: I0219 16:45:00.978529 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-895xv_64dc0d58-11d4-456b-97ab-a4d3ec28225b/control-plane-machine-set-operator/0.log" Feb 19 16:45:01 crc kubenswrapper[4810]: I0219 16:45:01.183860 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l66cb_9a7776ca-1a56-4eca-9e44-ba1b7b15510f/kube-rbac-proxy/0.log" Feb 19 16:45:01 crc kubenswrapper[4810]: I0219 16:45:01.306435 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l66cb_9a7776ca-1a56-4eca-9e44-ba1b7b15510f/machine-api-operator/0.log" Feb 19 16:45:01 crc kubenswrapper[4810]: I0219 16:45:01.382631 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" event={"ID":"642d0068-2155-46d5-85c6-d4f70d142f81","Type":"ContainerStarted","Data":"0bfef661b9df9370b288185a7c4e0a21adced38cc06cfc5bcc7c15675f009750"} Feb 19 16:45:01 crc kubenswrapper[4810]: I0219 16:45:01.382685 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" event={"ID":"642d0068-2155-46d5-85c6-d4f70d142f81","Type":"ContainerStarted","Data":"5d6972445d499d27164899c98a607acd2c10498b607f1c6b6cf8700155a05b83"} Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.397446 4810 generic.go:334] "Generic (PLEG): container finished" podID="642d0068-2155-46d5-85c6-d4f70d142f81" containerID="0bfef661b9df9370b288185a7c4e0a21adced38cc06cfc5bcc7c15675f009750" exitCode=0 Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.397499 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" event={"ID":"642d0068-2155-46d5-85c6-d4f70d142f81","Type":"ContainerDied","Data":"0bfef661b9df9370b288185a7c4e0a21adced38cc06cfc5bcc7c15675f009750"} Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.764342 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.807433 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume\") pod \"642d0068-2155-46d5-85c6-d4f70d142f81\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.807634 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume\") pod \"642d0068-2155-46d5-85c6-d4f70d142f81\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.807736 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gj6b\" (UniqueName: \"kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b\") pod \"642d0068-2155-46d5-85c6-d4f70d142f81\" (UID: \"642d0068-2155-46d5-85c6-d4f70d142f81\") " Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.809776 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume" (OuterVolumeSpecName: "config-volume") pod "642d0068-2155-46d5-85c6-d4f70d142f81" (UID: "642d0068-2155-46d5-85c6-d4f70d142f81"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.814774 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b" (OuterVolumeSpecName: "kube-api-access-2gj6b") pod "642d0068-2155-46d5-85c6-d4f70d142f81" (UID: "642d0068-2155-46d5-85c6-d4f70d142f81"). 
InnerVolumeSpecName "kube-api-access-2gj6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.815071 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "642d0068-2155-46d5-85c6-d4f70d142f81" (UID: "642d0068-2155-46d5-85c6-d4f70d142f81"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.909738 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642d0068-2155-46d5-85c6-d4f70d142f81-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.909946 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gj6b\" (UniqueName: \"kubernetes.io/projected/642d0068-2155-46d5-85c6-d4f70d142f81-kube-api-access-2gj6b\") on node \"crc\" DevicePath \"\"" Feb 19 16:45:02 crc kubenswrapper[4810]: I0219 16:45:02.910039 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642d0068-2155-46d5-85c6-d4f70d142f81-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 16:45:03 crc kubenswrapper[4810]: I0219 16:45:03.409270 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" event={"ID":"642d0068-2155-46d5-85c6-d4f70d142f81","Type":"ContainerDied","Data":"5d6972445d499d27164899c98a607acd2c10498b607f1c6b6cf8700155a05b83"} Feb 19 16:45:03 crc kubenswrapper[4810]: I0219 16:45:03.409348 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6972445d499d27164899c98a607acd2c10498b607f1c6b6cf8700155a05b83" Feb 19 16:45:03 crc kubenswrapper[4810]: I0219 16:45:03.410738 4810 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525325-x9zxt" Feb 19 16:45:03 crc kubenswrapper[4810]: I0219 16:45:03.842324 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd"] Feb 19 16:45:03 crc kubenswrapper[4810]: I0219 16:45:03.855514 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525280-gjtpd"] Feb 19 16:45:05 crc kubenswrapper[4810]: I0219 16:45:05.453789 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1327b7dc-e5ad-463c-8ca9-89b735b1fec2" path="/var/lib/kubelet/pods/1327b7dc-e5ad-463c-8ca9-89b735b1fec2/volumes" Feb 19 16:45:17 crc kubenswrapper[4810]: I0219 16:45:17.208869 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fwh4x_02206e32-6f49-407e-a02b-ce61e3daabf6/cert-manager-controller/0.log" Feb 19 16:45:17 crc kubenswrapper[4810]: I0219 16:45:17.409375 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-x4csq_54e755f0-9c2f-4d47-9979-b7b92996bab6/cert-manager-cainjector/0.log" Feb 19 16:45:17 crc kubenswrapper[4810]: I0219 16:45:17.461288 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7lspv_1224bbe4-6d8e-410e-8990-3813efdd2003/cert-manager-webhook/0.log" Feb 19 16:45:19 crc kubenswrapper[4810]: I0219 16:45:19.537977 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:45:19 crc kubenswrapper[4810]: I0219 16:45:19.538429 4810 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:45:19 crc kubenswrapper[4810]: I0219 16:45:19.538487 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:45:19 crc kubenswrapper[4810]: I0219 16:45:19.539477 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:45:19 crc kubenswrapper[4810]: I0219 16:45:19.539539 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" gracePeriod=600 Feb 19 16:45:19 crc kubenswrapper[4810]: E0219 16:45:19.677192 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:45:20 crc kubenswrapper[4810]: I0219 16:45:20.594161 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" 
containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" exitCode=0 Feb 19 16:45:20 crc kubenswrapper[4810]: I0219 16:45:20.594308 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e"} Feb 19 16:45:20 crc kubenswrapper[4810]: I0219 16:45:20.594629 4810 scope.go:117] "RemoveContainer" containerID="b8edda1f9342dacaf59cc4da7ca5a5e6fa7b1be00b10f0e3774c25fffdb24625" Feb 19 16:45:20 crc kubenswrapper[4810]: I0219 16:45:20.595771 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:45:20 crc kubenswrapper[4810]: E0219 16:45:20.596365 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.026120 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-kdwwx_35fc682a-0cc9-4922-a2f2-60da1ddb1eb9/nmstate-console-plugin/0.log" Feb 19 16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.195262 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bhhvv_c0eb0835-6df5-4a21-b309-f178a032d027/nmstate-handler/0.log" Feb 19 16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.229831 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4g952_ce589619-7c2f-43db-ae4f-fb43be7b07f4/kube-rbac-proxy/0.log" Feb 19 
16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.278707 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4g952_ce589619-7c2f-43db-ae4f-fb43be7b07f4/nmstate-metrics/0.log" Feb 19 16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.405273 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-p4hwg_f8300a06-7526-4da5-89a6-7fff8ff284c9/nmstate-operator/0.log" Feb 19 16:45:33 crc kubenswrapper[4810]: I0219 16:45:33.520889 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-ckvvq_db05e782-a3d7-4cbe-be3f-f6226d894864/nmstate-webhook/0.log" Feb 19 16:45:34 crc kubenswrapper[4810]: I0219 16:45:34.441648 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:45:34 crc kubenswrapper[4810]: E0219 16:45:34.442275 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:45:45 crc kubenswrapper[4810]: I0219 16:45:45.059103 4810 scope.go:117] "RemoveContainer" containerID="4e99f6d1c84e426443c4e13972b553f91aa0857b582f33dd75b9fc978d8acc56" Feb 19 16:45:49 crc kubenswrapper[4810]: I0219 16:45:49.439422 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:45:49 crc kubenswrapper[4810]: E0219 16:45:49.440312 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:45:51 crc kubenswrapper[4810]: I0219 16:45:51.603096 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jkkz_1656f52d-7771-4bbb-9642-b296d16b791e/prometheus-operator/0.log" Feb 19 16:45:51 crc kubenswrapper[4810]: I0219 16:45:51.645007 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt_d5debcf2-9629-4bb2-9133-f4b81748ff7d/prometheus-operator-admission-webhook/0.log" Feb 19 16:45:51 crc kubenswrapper[4810]: I0219 16:45:51.823137 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr_408628c0-0b2c-48f9-b849-ee1b124499e1/prometheus-operator-admission-webhook/0.log" Feb 19 16:45:51 crc kubenswrapper[4810]: I0219 16:45:51.878546 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dk9c4_8bdf030e-92d8-45dc-ab6c-a7b241444677/operator/0.log" Feb 19 16:45:52 crc kubenswrapper[4810]: I0219 16:45:52.020801 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2fdxm_c5968625-c396-4ae0-9846-c2ceb6baf655/perses-operator/0.log" Feb 19 16:46:04 crc kubenswrapper[4810]: I0219 16:46:04.439933 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:46:04 crc kubenswrapper[4810]: E0219 16:46:04.440650 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:46:09 crc kubenswrapper[4810]: I0219 16:46:09.801640 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jngcz_781d467e-8522-43a3-a552-1ceebc40cddd/kube-rbac-proxy/0.log" Feb 19 16:46:09 crc kubenswrapper[4810]: I0219 16:46:09.890410 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jngcz_781d467e-8522-43a3-a552-1ceebc40cddd/controller/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.093012 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.233088 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.239319 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.286643 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.294547 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.502676 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:46:10 
crc kubenswrapper[4810]: I0219 16:46:10.510653 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.557055 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.562967 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.716723 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.766750 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.777231 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/controller/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.811270 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.947828 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/kube-rbac-proxy/0.log" Feb 19 16:46:10 crc kubenswrapper[4810]: I0219 16:46:10.950030 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/frr-metrics/0.log" Feb 19 16:46:11 crc kubenswrapper[4810]: I0219 16:46:11.048007 4810 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/kube-rbac-proxy-frr/0.log" Feb 19 16:46:11 crc kubenswrapper[4810]: I0219 16:46:11.181988 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/reloader/0.log" Feb 19 16:46:11 crc kubenswrapper[4810]: I0219 16:46:11.287061 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-cwj24_1ee9f8f3-05a8-4648-b48d-4975285346d7/frr-k8s-webhook-server/0.log" Feb 19 16:46:12 crc kubenswrapper[4810]: I0219 16:46:12.103941 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75f48c59dc-m5vm8_f26047c7-b8cc-4ce2-8a48-4b380ab225c0/manager/0.log" Feb 19 16:46:12 crc kubenswrapper[4810]: I0219 16:46:12.210423 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-595d5f7545-vfb4c_3d62866f-b047-419d-8eb0-848b0df84e63/webhook-server/0.log" Feb 19 16:46:12 crc kubenswrapper[4810]: I0219 16:46:12.530388 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hllgd_c9d97974-67d2-42e5-89fe-b6db106a47c4/kube-rbac-proxy/0.log" Feb 19 16:46:12 crc kubenswrapper[4810]: I0219 16:46:12.533096 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/frr/0.log" Feb 19 16:46:12 crc kubenswrapper[4810]: I0219 16:46:12.751554 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hllgd_c9d97974-67d2-42e5-89fe-b6db106a47c4/speaker/0.log" Feb 19 16:46:18 crc kubenswrapper[4810]: I0219 16:46:18.439622 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:46:18 crc kubenswrapper[4810]: E0219 16:46:18.440562 4810 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.168456 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.384183 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.390832 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.408915 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.439127 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:46:29 crc kubenswrapper[4810]: E0219 16:46:29.439389 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.585362 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/extract/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.588470 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.618618 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.787084 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.947857 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.957679 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:46:29 crc kubenswrapper[4810]: I0219 16:46:29.968725 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 
16:46:30.111150 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.121607 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.139817 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/extract/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.304372 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.488626 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.488645 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.519407 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.637741 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.642699 4810 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.818973 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/registry-server/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.854425 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-utilities/0.log" Feb 19 16:46:30 crc kubenswrapper[4810]: I0219 16:46:30.993104 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-content/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.029534 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-utilities/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.040889 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-content/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.234770 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-utilities/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.253473 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/extract-content/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.441776 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.663453 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.760914 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-78p9d_4070bcdc-bd83-4c82-920b-8cd10671c498/registry-server/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.764860 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.765105 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.953625 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.975966 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:46:31 crc kubenswrapper[4810]: I0219 16:46:31.994823 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/extract/0.log" Feb 19 16:46:32 crc 
kubenswrapper[4810]: I0219 16:46:32.138863 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sm9wk_41d27e40-a89e-4fd6-8106-824c5a257f25/marketplace-operator/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.214994 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.499800 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.628882 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.719997 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.869589 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:46:32 crc kubenswrapper[4810]: I0219 16:46:32.870223 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.075299 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/registry-server/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.122208 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.314710 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.359700 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.403062 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.509398 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.530802 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:46:33 crc kubenswrapper[4810]: I0219 16:46:33.840626 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/registry-server/0.log" Feb 19 16:46:41 crc kubenswrapper[4810]: I0219 16:46:41.454148 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:46:41 crc kubenswrapper[4810]: E0219 16:46:41.457832 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:46:48 crc kubenswrapper[4810]: I0219 16:46:48.229146 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt_d5debcf2-9629-4bb2-9133-f4b81748ff7d/prometheus-operator-admission-webhook/0.log" Feb 19 16:46:48 crc kubenswrapper[4810]: I0219 16:46:48.310057 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr_408628c0-0b2c-48f9-b849-ee1b124499e1/prometheus-operator-admission-webhook/0.log" Feb 19 16:46:48 crc kubenswrapper[4810]: I0219 16:46:48.328237 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jkkz_1656f52d-7771-4bbb-9642-b296d16b791e/prometheus-operator/0.log" Feb 19 16:46:48 crc kubenswrapper[4810]: I0219 16:46:48.416589 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dk9c4_8bdf030e-92d8-45dc-ab6c-a7b241444677/operator/0.log" Feb 19 16:46:48 crc kubenswrapper[4810]: I0219 16:46:48.501763 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2fdxm_c5968625-c396-4ae0-9846-c2ceb6baf655/perses-operator/0.log" Feb 19 16:46:53 crc kubenswrapper[4810]: I0219 16:46:53.439581 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:46:53 crc kubenswrapper[4810]: E0219 16:46:53.440452 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:47:06 crc kubenswrapper[4810]: I0219 16:47:06.439140 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:47:06 crc kubenswrapper[4810]: E0219 16:47:06.439896 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:47:18 crc kubenswrapper[4810]: I0219 16:47:18.440051 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:47:18 crc kubenswrapper[4810]: E0219 16:47:18.441001 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:47:32 crc kubenswrapper[4810]: I0219 16:47:32.439463 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:47:32 crc kubenswrapper[4810]: E0219 16:47:32.440584 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:47:44 crc kubenswrapper[4810]: I0219 16:47:44.439591 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:47:44 crc kubenswrapper[4810]: E0219 16:47:44.440638 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:47:58 crc kubenswrapper[4810]: I0219 16:47:58.439372 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:47:58 crc kubenswrapper[4810]: E0219 16:47:58.440214 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:48:13 crc kubenswrapper[4810]: I0219 16:48:13.446809 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:48:13 crc kubenswrapper[4810]: E0219 16:48:13.447706 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:48:26 crc kubenswrapper[4810]: I0219 16:48:26.440317 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:48:26 crc kubenswrapper[4810]: E0219 16:48:26.441406 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:48:37 crc kubenswrapper[4810]: I0219 16:48:37.439509 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:48:37 crc kubenswrapper[4810]: E0219 16:48:37.440748 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:48:48 crc kubenswrapper[4810]: I0219 16:48:48.440406 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:48:48 crc kubenswrapper[4810]: E0219 16:48:48.442108 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:48:49 crc kubenswrapper[4810]: I0219 16:48:49.017886 4810 generic.go:334] "Generic (PLEG): container finished" podID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerID="17449341bfb55cd42e3f6347fa4879ccfdfca14899e053d20de9d0995b28213d" exitCode=0 Feb 19 16:48:49 crc kubenswrapper[4810]: I0219 16:48:49.017963 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" event={"ID":"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994","Type":"ContainerDied","Data":"17449341bfb55cd42e3f6347fa4879ccfdfca14899e053d20de9d0995b28213d"} Feb 19 16:48:49 crc kubenswrapper[4810]: I0219 16:48:49.018976 4810 scope.go:117] "RemoveContainer" containerID="17449341bfb55cd42e3f6347fa4879ccfdfca14899e053d20de9d0995b28213d" Feb 19 16:48:49 crc kubenswrapper[4810]: I0219 16:48:49.421578 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mt7ln_must-gather-xtkm7_2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994/gather/0.log" Feb 19 16:48:57 crc kubenswrapper[4810]: I0219 16:48:57.724007 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mt7ln/must-gather-xtkm7"] Feb 19 16:48:57 crc kubenswrapper[4810]: I0219 16:48:57.724653 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="copy" containerID="cri-o://61e6a470548c19ba5e113ede1e2ed5b363e40d072f7a28ac5b76e32667d06ba8" gracePeriod=2 Feb 19 16:48:57 crc kubenswrapper[4810]: I0219 16:48:57.737082 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mt7ln/must-gather-xtkm7"] Feb 19 16:48:58 crc 
kubenswrapper[4810]: I0219 16:48:58.148258 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mt7ln_must-gather-xtkm7_2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994/copy/0.log" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.149737 4810 generic.go:334] "Generic (PLEG): container finished" podID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerID="61e6a470548c19ba5e113ede1e2ed5b363e40d072f7a28ac5b76e32667d06ba8" exitCode=143 Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.149796 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="151c67b89f6c96a03e2c74c97afaae12904a2f90d765b36355e67b20a49b85e4" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.170726 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mt7ln_must-gather-xtkm7_2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994/copy/0.log" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.171343 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.253538 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output\") pod \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.253763 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dql\" (UniqueName: \"kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql\") pod \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\" (UID: \"2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994\") " Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.260238 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql" (OuterVolumeSpecName: "kube-api-access-67dql") pod "2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" (UID: "2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994"). InnerVolumeSpecName "kube-api-access-67dql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.357006 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67dql\" (UniqueName: \"kubernetes.io/projected/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-kube-api-access-67dql\") on node \"crc\" DevicePath \"\"" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.450847 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" (UID: "2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:48:58 crc kubenswrapper[4810]: I0219 16:48:58.458642 4810 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 16:48:59 crc kubenswrapper[4810]: I0219 16:48:59.159765 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mt7ln/must-gather-xtkm7" Feb 19 16:48:59 crc kubenswrapper[4810]: I0219 16:48:59.449855 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" path="/var/lib/kubelet/pods/2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994/volumes" Feb 19 16:49:03 crc kubenswrapper[4810]: I0219 16:49:03.440021 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:49:03 crc kubenswrapper[4810]: E0219 16:49:03.440775 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.089859 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtwq9"] Feb 19 16:49:12 crc kubenswrapper[4810]: E0219 16:49:12.091146 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="gather" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091169 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="gather" 
Feb 19 16:49:12 crc kubenswrapper[4810]: E0219 16:49:12.091228 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0068-2155-46d5-85c6-d4f70d142f81" containerName="collect-profiles" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091241 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0068-2155-46d5-85c6-d4f70d142f81" containerName="collect-profiles" Feb 19 16:49:12 crc kubenswrapper[4810]: E0219 16:49:12.091272 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="copy" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091285 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="copy" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091685 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="copy" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091712 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8b3bd1-400c-4da2-a6d6-efa7d8bbd994" containerName="gather" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.091742 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0068-2155-46d5-85c6-d4f70d142f81" containerName="collect-profiles" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.094275 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.128685 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtwq9"] Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.192314 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-catalog-content\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.192659 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52wb\" (UniqueName: \"kubernetes.io/projected/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-kube-api-access-z52wb\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.192826 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-utilities\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.295076 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z52wb\" (UniqueName: \"kubernetes.io/projected/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-kube-api-access-z52wb\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.295221 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-utilities\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.295286 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-catalog-content\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.295935 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-catalog-content\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.296230 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-utilities\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.319228 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z52wb\" (UniqueName: \"kubernetes.io/projected/7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1-kube-api-access-z52wb\") pod \"community-operators-wtwq9\" (UID: \"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1\") " pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:12 crc kubenswrapper[4810]: I0219 16:49:12.431676 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:13 crc kubenswrapper[4810]: I0219 16:49:13.018779 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtwq9"] Feb 19 16:49:13 crc kubenswrapper[4810]: I0219 16:49:13.332629 4810 generic.go:334] "Generic (PLEG): container finished" podID="7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1" containerID="3875e687f2e4bee41b307a420054102f3b0c17cfc118c7ff51e926bdf160d8f6" exitCode=0 Feb 19 16:49:13 crc kubenswrapper[4810]: I0219 16:49:13.332702 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwq9" event={"ID":"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1","Type":"ContainerDied","Data":"3875e687f2e4bee41b307a420054102f3b0c17cfc118c7ff51e926bdf160d8f6"} Feb 19 16:49:13 crc kubenswrapper[4810]: I0219 16:49:13.332766 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwq9" event={"ID":"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1","Type":"ContainerStarted","Data":"e4ff98de98e21960026f1b09a68015f3b8c43e525f6b3c17a4c5640b9e54426f"} Feb 19 16:49:13 crc kubenswrapper[4810]: I0219 16:49:13.335635 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:49:14 crc kubenswrapper[4810]: I0219 16:49:14.439948 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:49:14 crc kubenswrapper[4810]: E0219 16:49:14.440300 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 
16:49:18 crc kubenswrapper[4810]: I0219 16:49:18.408010 4810 generic.go:334] "Generic (PLEG): container finished" podID="7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1" containerID="f7b8c338c99efffac00e60cf312b5fdc213c42aa45d098f512cfebbd165281ab" exitCode=0 Feb 19 16:49:18 crc kubenswrapper[4810]: I0219 16:49:18.408656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwq9" event={"ID":"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1","Type":"ContainerDied","Data":"f7b8c338c99efffac00e60cf312b5fdc213c42aa45d098f512cfebbd165281ab"} Feb 19 16:49:19 crc kubenswrapper[4810]: I0219 16:49:19.422185 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwq9" event={"ID":"7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1","Type":"ContainerStarted","Data":"56a97f096bb3adbb5160b221a59b41089911f3a603720ba682acc8f52f5df050"} Feb 19 16:49:19 crc kubenswrapper[4810]: I0219 16:49:19.454419 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtwq9" podStartSLOduration=1.94934166 podStartE2EDuration="7.454395566s" podCreationTimestamp="2026-02-19 16:49:12 +0000 UTC" firstStartedPulling="2026-02-19 16:49:13.335147406 +0000 UTC m=+5982.817177560" lastFinishedPulling="2026-02-19 16:49:18.840201342 +0000 UTC m=+5988.322231466" observedRunningTime="2026-02-19 16:49:19.449352482 +0000 UTC m=+5988.931382646" watchObservedRunningTime="2026-02-19 16:49:19.454395566 +0000 UTC m=+5988.936425700" Feb 19 16:49:22 crc kubenswrapper[4810]: I0219 16:49:22.432370 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:22 crc kubenswrapper[4810]: I0219 16:49:22.433038 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:22 crc kubenswrapper[4810]: I0219 16:49:22.505689 4810 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:28 crc kubenswrapper[4810]: I0219 16:49:28.439720 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:49:28 crc kubenswrapper[4810]: E0219 16:49:28.440595 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:49:32 crc kubenswrapper[4810]: I0219 16:49:32.528662 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtwq9" Feb 19 16:49:32 crc kubenswrapper[4810]: I0219 16:49:32.628956 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtwq9"] Feb 19 16:49:32 crc kubenswrapper[4810]: I0219 16:49:32.671656 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:49:32 crc kubenswrapper[4810]: I0219 16:49:32.671999 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78p9d" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="registry-server" containerID="cri-o://2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f" gracePeriod=2 Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.132097 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.243955 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities\") pod \"4070bcdc-bd83-4c82-920b-8cd10671c498\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.244030 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88slz\" (UniqueName: \"kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz\") pod \"4070bcdc-bd83-4c82-920b-8cd10671c498\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.244105 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content\") pod \"4070bcdc-bd83-4c82-920b-8cd10671c498\" (UID: \"4070bcdc-bd83-4c82-920b-8cd10671c498\") " Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.244582 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities" (OuterVolumeSpecName: "utilities") pod "4070bcdc-bd83-4c82-920b-8cd10671c498" (UID: "4070bcdc-bd83-4c82-920b-8cd10671c498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.250426 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz" (OuterVolumeSpecName: "kube-api-access-88slz") pod "4070bcdc-bd83-4c82-920b-8cd10671c498" (UID: "4070bcdc-bd83-4c82-920b-8cd10671c498"). InnerVolumeSpecName "kube-api-access-88slz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.300538 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4070bcdc-bd83-4c82-920b-8cd10671c498" (UID: "4070bcdc-bd83-4c82-920b-8cd10671c498"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.347184 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.347234 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4070bcdc-bd83-4c82-920b-8cd10671c498-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.347249 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88slz\" (UniqueName: \"kubernetes.io/projected/4070bcdc-bd83-4c82-920b-8cd10671c498-kube-api-access-88slz\") on node \"crc\" DevicePath \"\"" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.602492 4810 generic.go:334] "Generic (PLEG): container finished" podID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerID="2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f" exitCode=0 Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.602552 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerDied","Data":"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f"} Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.602589 4810 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-78p9d" event={"ID":"4070bcdc-bd83-4c82-920b-8cd10671c498","Type":"ContainerDied","Data":"0c95ccc80153fc2aa4312b67f7d6c9a87f9473946f133fded513db482a269be4"} Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.602590 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78p9d" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.602611 4810 scope.go:117] "RemoveContainer" containerID="2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.624068 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.626836 4810 scope.go:117] "RemoveContainer" containerID="702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.632949 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78p9d"] Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.652336 4810 scope.go:117] "RemoveContainer" containerID="ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.719677 4810 scope.go:117] "RemoveContainer" containerID="2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f" Feb 19 16:49:33 crc kubenswrapper[4810]: E0219 16:49:33.720164 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f\": container with ID starting with 2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f not found: ID does not exist" containerID="2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 
16:49:33.720206 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f"} err="failed to get container status \"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f\": rpc error: code = NotFound desc = could not find container \"2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f\": container with ID starting with 2d183229f511b7d5a8cb0fb396e7514676855b732318632722a576d32921f68f not found: ID does not exist" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.720231 4810 scope.go:117] "RemoveContainer" containerID="702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0" Feb 19 16:49:33 crc kubenswrapper[4810]: E0219 16:49:33.720678 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0\": container with ID starting with 702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0 not found: ID does not exist" containerID="702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.720705 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0"} err="failed to get container status \"702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0\": rpc error: code = NotFound desc = could not find container \"702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0\": container with ID starting with 702b9f8e53344f3c21f1cd920ff8a37a750c3a8370aa348e1d772d912bfa2ac0 not found: ID does not exist" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.720728 4810 scope.go:117] "RemoveContainer" containerID="ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652" Feb 19 16:49:33 crc 
kubenswrapper[4810]: E0219 16:49:33.721028 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652\": container with ID starting with ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652 not found: ID does not exist" containerID="ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652" Feb 19 16:49:33 crc kubenswrapper[4810]: I0219 16:49:33.721086 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652"} err="failed to get container status \"ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652\": rpc error: code = NotFound desc = could not find container \"ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652\": container with ID starting with ab74f7f85dc91d920ea16df4b6ea4adb1452fa6de4e5f98251385b261184b652 not found: ID does not exist" Feb 19 16:49:35 crc kubenswrapper[4810]: I0219 16:49:35.455732 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" path="/var/lib/kubelet/pods/4070bcdc-bd83-4c82-920b-8cd10671c498/volumes" Feb 19 16:49:42 crc kubenswrapper[4810]: I0219 16:49:42.441902 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:49:42 crc kubenswrapper[4810]: E0219 16:49:42.443687 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:49:45 crc 
kubenswrapper[4810]: I0219 16:49:45.204919 4810 scope.go:117] "RemoveContainer" containerID="61e6a470548c19ba5e113ede1e2ed5b363e40d072f7a28ac5b76e32667d06ba8" Feb 19 16:49:45 crc kubenswrapper[4810]: I0219 16:49:45.236319 4810 scope.go:117] "RemoveContainer" containerID="17449341bfb55cd42e3f6347fa4879ccfdfca14899e053d20de9d0995b28213d" Feb 19 16:49:45 crc kubenswrapper[4810]: I0219 16:49:45.342022 4810 scope.go:117] "RemoveContainer" containerID="8eab70b763bfc660b31c0a60ab6d66eddd4b83e4bf6e5028c9b6b68f68641100" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.273726 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:49:56 crc kubenswrapper[4810]: E0219 16:49:56.275202 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="extract-content" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.275224 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="extract-content" Feb 19 16:49:56 crc kubenswrapper[4810]: E0219 16:49:56.275272 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="registry-server" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.275284 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="registry-server" Feb 19 16:49:56 crc kubenswrapper[4810]: E0219 16:49:56.275319 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="extract-utilities" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.275354 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="extract-utilities" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.275711 4810 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4070bcdc-bd83-4c82-920b-8cd10671c498" containerName="registry-server" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.278541 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.296536 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.328352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz44j\" (UniqueName: \"kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.328426 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.328575 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.431403 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities\") pod \"redhat-operators-6xmxg\" (UID: 
\"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.432024 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.432222 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz44j\" (UniqueName: \"kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.432463 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.432846 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.463987 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz44j\" (UniqueName: \"kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j\") pod \"redhat-operators-6xmxg\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " 
pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:56 crc kubenswrapper[4810]: I0219 16:49:56.618624 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:49:57 crc kubenswrapper[4810]: I0219 16:49:57.198458 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:49:57 crc kubenswrapper[4810]: I0219 16:49:57.439362 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:49:57 crc kubenswrapper[4810]: E0219 16:49:57.439953 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:49:57 crc kubenswrapper[4810]: I0219 16:49:57.929607 4810 generic.go:334] "Generic (PLEG): container finished" podID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerID="af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec" exitCode=0 Feb 19 16:49:57 crc kubenswrapper[4810]: I0219 16:49:57.929668 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerDied","Data":"af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec"} Feb 19 16:49:57 crc kubenswrapper[4810]: I0219 16:49:57.929712 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerStarted","Data":"e208e8dc67c8ea6085bc7423038bf7a34783c4f8f871e552d86a48225914d4f3"} Feb 19 16:49:59 crc 
kubenswrapper[4810]: I0219 16:49:59.958534 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerStarted","Data":"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07"} Feb 19 16:50:02 crc kubenswrapper[4810]: I0219 16:50:02.993896 4810 generic.go:334] "Generic (PLEG): container finished" podID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerID="583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07" exitCode=0 Feb 19 16:50:02 crc kubenswrapper[4810]: I0219 16:50:02.993979 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerDied","Data":"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07"} Feb 19 16:50:04 crc kubenswrapper[4810]: I0219 16:50:04.006441 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerStarted","Data":"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e"} Feb 19 16:50:04 crc kubenswrapper[4810]: I0219 16:50:04.038042 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xmxg" podStartSLOduration=2.27834756 podStartE2EDuration="8.038024961s" podCreationTimestamp="2026-02-19 16:49:56 +0000 UTC" firstStartedPulling="2026-02-19 16:49:57.932340337 +0000 UTC m=+6027.414370461" lastFinishedPulling="2026-02-19 16:50:03.692017708 +0000 UTC m=+6033.174047862" observedRunningTime="2026-02-19 16:50:04.032899034 +0000 UTC m=+6033.514929158" watchObservedRunningTime="2026-02-19 16:50:04.038024961 +0000 UTC m=+6033.520055085" Feb 19 16:50:06 crc kubenswrapper[4810]: I0219 16:50:06.618965 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:06 crc kubenswrapper[4810]: I0219 16:50:06.619676 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:07 crc kubenswrapper[4810]: I0219 16:50:07.694945 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xmxg" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="registry-server" probeResult="failure" output=< Feb 19 16:50:07 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:50:07 crc kubenswrapper[4810]: > Feb 19 16:50:09 crc kubenswrapper[4810]: I0219 16:50:09.440118 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:50:09 crc kubenswrapper[4810]: E0219 16:50:09.440456 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:50:16 crc kubenswrapper[4810]: I0219 16:50:16.713710 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:16 crc kubenswrapper[4810]: I0219 16:50:16.801820 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:16 crc kubenswrapper[4810]: I0219 16:50:16.976157 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.176391 4810 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-6xmxg" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="registry-server" containerID="cri-o://30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e" gracePeriod=2 Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.703897 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.768276 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities\") pod \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.768920 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content\") pod \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.769412 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz44j\" (UniqueName: \"kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j\") pod \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\" (UID: \"e82d7377-faf1-409d-bc2e-581cbdcd1f6d\") " Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.769746 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities" (OuterVolumeSpecName: "utilities") pod "e82d7377-faf1-409d-bc2e-581cbdcd1f6d" (UID: "e82d7377-faf1-409d-bc2e-581cbdcd1f6d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.771423 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.775863 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j" (OuterVolumeSpecName: "kube-api-access-lz44j") pod "e82d7377-faf1-409d-bc2e-581cbdcd1f6d" (UID: "e82d7377-faf1-409d-bc2e-581cbdcd1f6d"). InnerVolumeSpecName "kube-api-access-lz44j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.873681 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz44j\" (UniqueName: \"kubernetes.io/projected/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-kube-api-access-lz44j\") on node \"crc\" DevicePath \"\"" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.916655 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e82d7377-faf1-409d-bc2e-581cbdcd1f6d" (UID: "e82d7377-faf1-409d-bc2e-581cbdcd1f6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:50:18 crc kubenswrapper[4810]: I0219 16:50:18.975408 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e82d7377-faf1-409d-bc2e-581cbdcd1f6d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.193768 4810 generic.go:334] "Generic (PLEG): container finished" podID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerID="30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e" exitCode=0 Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.193839 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerDied","Data":"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e"} Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.193883 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xmxg" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.194320 4810 scope.go:117] "RemoveContainer" containerID="30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.194298 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xmxg" event={"ID":"e82d7377-faf1-409d-bc2e-581cbdcd1f6d","Type":"ContainerDied","Data":"e208e8dc67c8ea6085bc7423038bf7a34783c4f8f871e552d86a48225914d4f3"} Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.229467 4810 scope.go:117] "RemoveContainer" containerID="583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.271805 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.278522 4810 scope.go:117] "RemoveContainer" containerID="af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.289298 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xmxg"] Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.337046 4810 scope.go:117] "RemoveContainer" containerID="30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e" Feb 19 16:50:19 crc kubenswrapper[4810]: E0219 16:50:19.337720 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e\": container with ID starting with 30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e not found: ID does not exist" containerID="30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.337781 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e"} err="failed to get container status \"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e\": rpc error: code = NotFound desc = could not find container \"30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e\": container with ID starting with 30f6fb85d62122e3a21c79cc5e4b78d9d36ebef448e6e43c507f3bfef300031e not found: ID does not exist" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.337815 4810 scope.go:117] "RemoveContainer" containerID="583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07" Feb 19 16:50:19 crc kubenswrapper[4810]: E0219 16:50:19.339979 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07\": container with ID starting with 583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07 not found: ID does not exist" containerID="583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.340020 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07"} err="failed to get container status \"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07\": rpc error: code = NotFound desc = could not find container \"583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07\": container with ID starting with 583d135e5a359966cee557b1b2d0e1e575c864596a3a4658d92283edd29d9c07 not found: ID does not exist" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.340069 4810 scope.go:117] "RemoveContainer" containerID="af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec" Feb 19 16:50:19 crc kubenswrapper[4810]: E0219 
16:50:19.340701 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec\": container with ID starting with af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec not found: ID does not exist" containerID="af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.340785 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec"} err="failed to get container status \"af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec\": rpc error: code = NotFound desc = could not find container \"af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec\": container with ID starting with af6b1b41bd819502addcb6561dbf8e4c1088ea0e95e1b6e7bc74408aef9ce7ec not found: ID does not exist" Feb 19 16:50:19 crc kubenswrapper[4810]: I0219 16:50:19.457249 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" path="/var/lib/kubelet/pods/e82d7377-faf1-409d-bc2e-581cbdcd1f6d/volumes" Feb 19 16:50:20 crc kubenswrapper[4810]: I0219 16:50:20.443921 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:50:21 crc kubenswrapper[4810]: I0219 16:50:21.233390 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2"} Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.306192 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-944q7/must-gather-452lm"] Feb 19 16:52:26 crc kubenswrapper[4810]: E0219 
16:52:26.307132 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="extract-content" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.307147 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="extract-content" Feb 19 16:52:26 crc kubenswrapper[4810]: E0219 16:52:26.307178 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="registry-server" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.307189 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="registry-server" Feb 19 16:52:26 crc kubenswrapper[4810]: E0219 16:52:26.307204 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="extract-utilities" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.307212 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="extract-utilities" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.307466 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e82d7377-faf1-409d-bc2e-581cbdcd1f6d" containerName="registry-server" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.308813 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.329422 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-944q7"/"default-dockercfg-4gktv" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.329725 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-944q7"/"openshift-service-ca.crt" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.330088 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-944q7"/"kube-root-ca.crt" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.385744 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-944q7/must-gather-452lm"] Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.410564 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rmt\" (UniqueName: \"kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.410703 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.512822 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " 
pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.512966 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6rmt\" (UniqueName: \"kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.513473 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.532182 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rmt\" (UniqueName: \"kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt\") pod \"must-gather-452lm\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:26 crc kubenswrapper[4810]: I0219 16:52:26.674427 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:52:27 crc kubenswrapper[4810]: I0219 16:52:27.191851 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-944q7/must-gather-452lm"] Feb 19 16:52:27 crc kubenswrapper[4810]: I0219 16:52:27.778126 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/must-gather-452lm" event={"ID":"0adbd447-568e-48c8-ab76-3d2f20e3f315","Type":"ContainerStarted","Data":"35e06e2cca1fb991fa940df76a9f88f0b3d758223a060db89405e9f9e28e0bdb"} Feb 19 16:52:27 crc kubenswrapper[4810]: I0219 16:52:27.778497 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/must-gather-452lm" event={"ID":"0adbd447-568e-48c8-ab76-3d2f20e3f315","Type":"ContainerStarted","Data":"07153056d19231ca3d7ab4d1dd2cd03f17f8e8578ea20e8a31486c974d0f0b3d"} Feb 19 16:52:27 crc kubenswrapper[4810]: I0219 16:52:27.778512 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/must-gather-452lm" event={"ID":"0adbd447-568e-48c8-ab76-3d2f20e3f315","Type":"ContainerStarted","Data":"cf36fcfbf554bf4a4c84825c8d6c3208469e142f948d23106d4b61d616d08799"} Feb 19 16:52:27 crc kubenswrapper[4810]: I0219 16:52:27.803256 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-944q7/must-gather-452lm" podStartSLOduration=1.80323674 podStartE2EDuration="1.80323674s" podCreationTimestamp="2026-02-19 16:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:52:27.795921209 +0000 UTC m=+6177.277951333" watchObservedRunningTime="2026-02-19 16:52:27.80323674 +0000 UTC m=+6177.285266854" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.227587 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-944q7/crc-debug-98k9d"] Feb 19 16:52:31 crc kubenswrapper[4810]: 
I0219 16:52:31.229540 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.329556 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.329646 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbj5v\" (UniqueName: \"kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.431912 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.432268 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbj5v\" (UniqueName: \"kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.432103 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") 
" pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.461106 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbj5v\" (UniqueName: \"kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v\") pod \"crc-debug-98k9d\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.559660 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:52:31 crc kubenswrapper[4810]: I0219 16:52:31.828065 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-98k9d" event={"ID":"7e7afe8a-09e9-464c-b8ba-ed36963c58af","Type":"ContainerStarted","Data":"3fcdfa3b9cf34af832a3d571b41bf6ce1daea3be21eb9d89797e41fab2452dae"} Feb 19 16:52:32 crc kubenswrapper[4810]: I0219 16:52:32.839441 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-98k9d" event={"ID":"7e7afe8a-09e9-464c-b8ba-ed36963c58af","Type":"ContainerStarted","Data":"1827e095c322c4555855b6ea50a05730da42c19ada929bb9656b95a872f9917c"} Feb 19 16:52:32 crc kubenswrapper[4810]: I0219 16:52:32.853374 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-944q7/crc-debug-98k9d" podStartSLOduration=1.853352103 podStartE2EDuration="1.853352103s" podCreationTimestamp="2026-02-19 16:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:52:32.852092762 +0000 UTC m=+6182.334122876" watchObservedRunningTime="2026-02-19 16:52:32.853352103 +0000 UTC m=+6182.335382237" Feb 19 16:52:49 crc kubenswrapper[4810]: I0219 16:52:49.537293 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:52:49 crc kubenswrapper[4810]: I0219 16:52:49.537747 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:53:13 crc kubenswrapper[4810]: I0219 16:53:13.248081 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e7afe8a-09e9-464c-b8ba-ed36963c58af" containerID="1827e095c322c4555855b6ea50a05730da42c19ada929bb9656b95a872f9917c" exitCode=0 Feb 19 16:53:13 crc kubenswrapper[4810]: I0219 16:53:13.248168 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-98k9d" event={"ID":"7e7afe8a-09e9-464c-b8ba-ed36963c58af","Type":"ContainerDied","Data":"1827e095c322c4555855b6ea50a05730da42c19ada929bb9656b95a872f9917c"} Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.377896 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.414394 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-944q7/crc-debug-98k9d"] Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.424282 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-944q7/crc-debug-98k9d"] Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.446552 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host\") pod \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.446605 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbj5v\" (UniqueName: \"kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v\") pod \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\" (UID: \"7e7afe8a-09e9-464c-b8ba-ed36963c58af\") " Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.447981 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host" (OuterVolumeSpecName: "host") pod "7e7afe8a-09e9-464c-b8ba-ed36963c58af" (UID: "7e7afe8a-09e9-464c-b8ba-ed36963c58af"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.468573 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v" (OuterVolumeSpecName: "kube-api-access-fbj5v") pod "7e7afe8a-09e9-464c-b8ba-ed36963c58af" (UID: "7e7afe8a-09e9-464c-b8ba-ed36963c58af"). InnerVolumeSpecName "kube-api-access-fbj5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.549469 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e7afe8a-09e9-464c-b8ba-ed36963c58af-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:14 crc kubenswrapper[4810]: I0219 16:53:14.549507 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbj5v\" (UniqueName: \"kubernetes.io/projected/7e7afe8a-09e9-464c-b8ba-ed36963c58af-kube-api-access-fbj5v\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.270521 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fcdfa3b9cf34af832a3d571b41bf6ce1daea3be21eb9d89797e41fab2452dae" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.270887 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-98k9d" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.452845 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7afe8a-09e9-464c-b8ba-ed36963c58af" path="/var/lib/kubelet/pods/7e7afe8a-09e9-464c-b8ba-ed36963c58af/volumes" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.624194 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-944q7/crc-debug-ck97z"] Feb 19 16:53:15 crc kubenswrapper[4810]: E0219 16:53:15.624623 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7afe8a-09e9-464c-b8ba-ed36963c58af" containerName="container-00" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.624641 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7afe8a-09e9-464c-b8ba-ed36963c58af" containerName="container-00" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.624845 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7afe8a-09e9-464c-b8ba-ed36963c58af" 
containerName="container-00" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.625573 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.793076 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host\") pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.793640 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72khb\" (UniqueName: \"kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb\") pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.895587 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host\") pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.895883 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72khb\" (UniqueName: \"kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb\") pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.896303 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host\") 
pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.919930 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72khb\" (UniqueName: \"kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb\") pod \"crc-debug-ck97z\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:15 crc kubenswrapper[4810]: I0219 16:53:15.942951 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:16 crc kubenswrapper[4810]: I0219 16:53:16.282645 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-ck97z" event={"ID":"f0809137-426c-4833-a028-76093d3d92f5","Type":"ContainerStarted","Data":"c35e5c9c4b5ddac624677f837531b6c053c31b4d14fb41f66bcb860eccf31d9d"} Feb 19 16:53:16 crc kubenswrapper[4810]: I0219 16:53:16.282968 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-ck97z" event={"ID":"f0809137-426c-4833-a028-76093d3d92f5","Type":"ContainerStarted","Data":"4956638e7759c91b229eb1bf7e713d3c122f23d048d6e53e36a4b7bdbf9b5761"} Feb 19 16:53:16 crc kubenswrapper[4810]: I0219 16:53:16.300836 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-944q7/crc-debug-ck97z" podStartSLOduration=1.300807834 podStartE2EDuration="1.300807834s" podCreationTimestamp="2026-02-19 16:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 16:53:16.293967185 +0000 UTC m=+6225.775997309" watchObservedRunningTime="2026-02-19 16:53:16.300807834 +0000 UTC m=+6225.782837958" Feb 19 16:53:17 crc kubenswrapper[4810]: I0219 16:53:17.299375 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="f0809137-426c-4833-a028-76093d3d92f5" containerID="c35e5c9c4b5ddac624677f837531b6c053c31b4d14fb41f66bcb860eccf31d9d" exitCode=0 Feb 19 16:53:17 crc kubenswrapper[4810]: I0219 16:53:17.299815 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-ck97z" event={"ID":"f0809137-426c-4833-a028-76093d3d92f5","Type":"ContainerDied","Data":"c35e5c9c4b5ddac624677f837531b6c053c31b4d14fb41f66bcb860eccf31d9d"} Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.419103 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.452391 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-944q7/crc-debug-ck97z"] Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.460491 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-944q7/crc-debug-ck97z"] Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.563901 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host\") pod \"f0809137-426c-4833-a028-76093d3d92f5\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.564156 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72khb\" (UniqueName: \"kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb\") pod \"f0809137-426c-4833-a028-76093d3d92f5\" (UID: \"f0809137-426c-4833-a028-76093d3d92f5\") " Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.565290 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host" (OuterVolumeSpecName: "host") pod 
"f0809137-426c-4833-a028-76093d3d92f5" (UID: "f0809137-426c-4833-a028-76093d3d92f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.577542 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb" (OuterVolumeSpecName: "kube-api-access-72khb") pod "f0809137-426c-4833-a028-76093d3d92f5" (UID: "f0809137-426c-4833-a028-76093d3d92f5"). InnerVolumeSpecName "kube-api-access-72khb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.666669 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72khb\" (UniqueName: \"kubernetes.io/projected/f0809137-426c-4833-a028-76093d3d92f5-kube-api-access-72khb\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:18 crc kubenswrapper[4810]: I0219 16:53:18.666705 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0809137-426c-4833-a028-76093d3d92f5-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.318019 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4956638e7759c91b229eb1bf7e713d3c122f23d048d6e53e36a4b7bdbf9b5761" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.318069 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-ck97z" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.450501 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0809137-426c-4833-a028-76093d3d92f5" path="/var/lib/kubelet/pods/f0809137-426c-4833-a028-76093d3d92f5/volumes" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.618627 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.618674 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.799276 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-944q7/crc-debug-qjwzn"] Feb 19 16:53:19 crc kubenswrapper[4810]: E0219 16:53:19.799667 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0809137-426c-4833-a028-76093d3d92f5" containerName="container-00" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.799685 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0809137-426c-4833-a028-76093d3d92f5" containerName="container-00" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.799929 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0809137-426c-4833-a028-76093d3d92f5" containerName="container-00" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.800563 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.924187 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkh4m\" (UniqueName: \"kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:19 crc kubenswrapper[4810]: I0219 16:53:19.924262 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc kubenswrapper[4810]: I0219 16:53:20.026387 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkh4m\" (UniqueName: \"kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc kubenswrapper[4810]: I0219 16:53:20.026460 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc kubenswrapper[4810]: I0219 16:53:20.026624 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc 
kubenswrapper[4810]: I0219 16:53:20.070238 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkh4m\" (UniqueName: \"kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m\") pod \"crc-debug-qjwzn\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc kubenswrapper[4810]: I0219 16:53:20.117565 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:20 crc kubenswrapper[4810]: W0219 16:53:20.144966 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aae9737_d017_4a11_8323_cd0354ba09aa.slice/crio-9656164cf92bf8b01c9f1e2af11c6643c46568e292fcd76132e42a9d4a87139c WatchSource:0}: Error finding container 9656164cf92bf8b01c9f1e2af11c6643c46568e292fcd76132e42a9d4a87139c: Status 404 returned error can't find the container with id 9656164cf92bf8b01c9f1e2af11c6643c46568e292fcd76132e42a9d4a87139c Feb 19 16:53:20 crc kubenswrapper[4810]: I0219 16:53:20.335069 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-qjwzn" event={"ID":"3aae9737-d017-4a11-8323-cd0354ba09aa","Type":"ContainerStarted","Data":"9656164cf92bf8b01c9f1e2af11c6643c46568e292fcd76132e42a9d4a87139c"} Feb 19 16:53:21 crc kubenswrapper[4810]: I0219 16:53:21.345785 4810 generic.go:334] "Generic (PLEG): container finished" podID="3aae9737-d017-4a11-8323-cd0354ba09aa" containerID="e99e76486ae25fbcf4eaf9167a8e92681978470106cc7b5ff79341a65e62afbd" exitCode=0 Feb 19 16:53:21 crc kubenswrapper[4810]: I0219 16:53:21.345856 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/crc-debug-qjwzn" event={"ID":"3aae9737-d017-4a11-8323-cd0354ba09aa","Type":"ContainerDied","Data":"e99e76486ae25fbcf4eaf9167a8e92681978470106cc7b5ff79341a65e62afbd"} Feb 19 
16:53:21 crc kubenswrapper[4810]: I0219 16:53:21.394173 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-944q7/crc-debug-qjwzn"] Feb 19 16:53:21 crc kubenswrapper[4810]: I0219 16:53:21.404286 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-944q7/crc-debug-qjwzn"] Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.477257 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.583026 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkh4m\" (UniqueName: \"kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m\") pod \"3aae9737-d017-4a11-8323-cd0354ba09aa\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.583222 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host\") pod \"3aae9737-d017-4a11-8323-cd0354ba09aa\" (UID: \"3aae9737-d017-4a11-8323-cd0354ba09aa\") " Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.583591 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host" (OuterVolumeSpecName: "host") pod "3aae9737-d017-4a11-8323-cd0354ba09aa" (UID: "3aae9737-d017-4a11-8323-cd0354ba09aa"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.589569 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m" (OuterVolumeSpecName: "kube-api-access-dkh4m") pod "3aae9737-d017-4a11-8323-cd0354ba09aa" (UID: "3aae9737-d017-4a11-8323-cd0354ba09aa"). InnerVolumeSpecName "kube-api-access-dkh4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.684746 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkh4m\" (UniqueName: \"kubernetes.io/projected/3aae9737-d017-4a11-8323-cd0354ba09aa-kube-api-access-dkh4m\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:22 crc kubenswrapper[4810]: I0219 16:53:22.684788 4810 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aae9737-d017-4a11-8323-cd0354ba09aa-host\") on node \"crc\" DevicePath \"\"" Feb 19 16:53:23 crc kubenswrapper[4810]: I0219 16:53:23.369209 4810 scope.go:117] "RemoveContainer" containerID="e99e76486ae25fbcf4eaf9167a8e92681978470106cc7b5ff79341a65e62afbd" Feb 19 16:53:23 crc kubenswrapper[4810]: I0219 16:53:23.369260 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/crc-debug-qjwzn" Feb 19 16:53:23 crc kubenswrapper[4810]: I0219 16:53:23.455683 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aae9737-d017-4a11-8323-cd0354ba09aa" path="/var/lib/kubelet/pods/3aae9737-d017-4a11-8323-cd0354ba09aa/volumes" Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.537839 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.538391 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.538436 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.539216 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.539275 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" 
containerName="machine-config-daemon" containerID="cri-o://30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2" gracePeriod=600 Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.715226 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2" exitCode=0 Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.715502 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2"} Feb 19 16:53:49 crc kubenswrapper[4810]: I0219 16:53:49.715535 4810 scope.go:117] "RemoveContainer" containerID="63bd1770c325dfdb5ccd64220060f108b50a35fa6325f46acd3bb9d2e6c06e3e" Feb 19 16:53:50 crc kubenswrapper[4810]: I0219 16:53:50.725681 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"} Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.270289 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b886df68b-htd57_8d391303-b5ee-4f63-8035-12f123f35e65/barbican-api/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.366996 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b886df68b-htd57_8d391303-b5ee-4f63-8035-12f123f35e65/barbican-api-log/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.490570 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75f99f68b4-d7hj4_c008ffcd-bb96-47dd-a311-fdc58f6d8918/barbican-keystone-listener/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 
16:54:09.569666 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75f99f68b4-d7hj4_c008ffcd-bb96-47dd-a311-fdc58f6d8918/barbican-keystone-listener-log/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.737409 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f8775989-n9rgr_f277c31b-ff97-4f3b-aec3-c5cfe9293d60/barbican-worker/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.753131 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58f8775989-n9rgr_f277c31b-ff97-4f3b-aec3-c5cfe9293d60/barbican-worker-log/0.log" Feb 19 16:54:09 crc kubenswrapper[4810]: I0219 16:54:09.985158 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sdzkx_c4a9ca21-e1c7-490d-8078-14407b530301/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.045724 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/ceilometer-notification-agent/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.131860 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/ceilometer-central-agent/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.254240 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/sg-core/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.256626 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e7d91a4d-5b61-404e-a58b-cb426722f883/proxy-httpd/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.504127 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_1723b820-73ac-49f3-8716-283bf2c05925/cinder-api-log/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.776766 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f66b86b2-b164-4380-8a89-bb0cf5f833ef/probe/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.874628 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f66b86b2-b164-4380-8a89-bb0cf5f833ef/cinder-backup/0.log" Feb 19 16:54:10 crc kubenswrapper[4810]: I0219 16:54:10.939724 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1723b820-73ac-49f3-8716-283bf2c05925/cinder-api/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.061532 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_48d5e3e9-853c-4988-8746-a6f74e1fe209/cinder-scheduler/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.135939 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_48d5e3e9-853c-4988-8746-a6f74e1fe209/probe/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.266886 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_20a46eb8-508d-45be-bf13-31aed23d1582/cinder-volume/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.533701 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_20a46eb8-508d-45be-bf13-31aed23d1582/probe/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.721916 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_74a12495-8d82-4296-9328-430af6d923b2/probe/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.767628 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_74a12495-8d82-4296-9328-430af6d923b2/cinder-volume/0.log" Feb 19 
16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.834286 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6q498_2cff3a3e-0543-4fec-8f5b-5421be276386/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:11 crc kubenswrapper[4810]: I0219 16:54:11.987187 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cm25d_7e1f4472-242a-40a0-a574-9c3119fdb705/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.150137 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/init/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.279786 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/init/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.365502 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-flmfl_e6255c5c-26d4-421f-9156-1bdd2f5adcc6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.490968 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-685d6df875-6hghq_7c074feb-2f7c-4f84-9ea8-5a9062e6b10a/dnsmasq-dns/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.652372 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_41a4af93-6f80-4097-a964-2e3f3055fd3b/glance-httpd/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.676303 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_41a4af93-6f80-4097-a964-2e3f3055fd3b/glance-log/0.log" Feb 19 16:54:12 crc 
kubenswrapper[4810]: I0219 16:54:12.826920 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4/glance-httpd/0.log" Feb 19 16:54:12 crc kubenswrapper[4810]: I0219 16:54:12.843835 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6cc215f9-4ba7-4f38-aea0-bdb68abb2ad4/glance-log/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.027348 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f56498b8d-9gwmf_737d6629-747f-4d16-a545-d0070c20fe5d/horizon/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.293774 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-n7jh2_31bd8fe5-f0b6-4463-a545-bdeb0c33b182/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.380519 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s8kk5_12c3c35e-69ec-4150-8f1e-7eeb6cc01d9b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.648694 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29525281-26qqv_8984eff3-6c82-4e2f-8bd6-1e820a450874/keystone-cron/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.661580 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f56498b8d-9gwmf_737d6629-747f-4d16-a545-d0070c20fe5d/horizon-log/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.873752 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9358dbee-2e5b-432d-98e0-6945d2e0d44b/kube-state-metrics/0.log" Feb 19 16:54:13 crc kubenswrapper[4810]: I0219 16:54:13.999719 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mxd44_b0d687e9-21b0-4abe-b7ec-4fb050926f6c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.135255 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cd8bf58f4-ktsjk_95165d88-ea72-4785-8c1a-eea4d54466fb/keystone-api/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.496040 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dfcf65577-bd5w2_6528bdfd-3389-4776-826e-164fc5117682/neutron-httpd/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.504787 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wpxhs_6650a3db-fdc1-4342-b8a8-cb91376e75c5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.595150 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dfcf65577-bd5w2_6528bdfd-3389-4776-826e-164fc5117682/neutron-api/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.676167 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/setup-container/0.log" Feb 19 16:54:14 crc kubenswrapper[4810]: I0219 16:54:14.948683 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/setup-container/0.log" Feb 19 16:54:15 crc kubenswrapper[4810]: I0219 16:54:15.189362 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_4ad875c7-9e7b-4cc8-b6f7-e6fbeb95573c/rabbitmq/0.log" Feb 19 16:54:15 crc kubenswrapper[4810]: I0219 16:54:15.840578 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_65e6588c-3b7f-4719-beb6-90229629820f/nova-cell0-conductor-conductor/0.log" Feb 19 16:54:16 crc kubenswrapper[4810]: I0219 16:54:16.066621 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f93aa728-7924-4a75-ad48-cc174764cf3e/nova-cell1-conductor-conductor/0.log" Feb 19 16:54:16 crc kubenswrapper[4810]: I0219 16:54:16.597681 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5a7915d4-6c3f-4bc7-b21d-7d51b675640f/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 16:54:16 crc kubenswrapper[4810]: I0219 16:54:16.705533 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nv8wh_cc5014f8-e5aa-47ad-8787-c187b0f7f0e1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:16 crc kubenswrapper[4810]: I0219 16:54:16.799630 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6397af05-d030-46c2-8a0f-a90beb9b2502/nova-api-log/0.log" Feb 19 16:54:16 crc kubenswrapper[4810]: I0219 16:54:16.932868 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6397af05-d030-46c2-8a0f-a90beb9b2502/nova-api-api/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.052518 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f36ad344-e946-4221-892d-3ffe8fbdd59b/nova-metadata-log/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.306783 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/mysql-bootstrap/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.482615 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d70a0a1b-ed2d-46f1-aeb9-a335de9b06d4/nova-scheduler-scheduler/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 
16:54:17.517065 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/mysql-bootstrap/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.549359 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_30d11a24-9722-4e7a-9be5-f2bd00128167/galera/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.734733 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/mysql-bootstrap/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.948451 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/mysql-bootstrap/0.log" Feb 19 16:54:17 crc kubenswrapper[4810]: I0219 16:54:17.963905 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c0ffb8ce-a356-4416-b96c-49db30ff1947/galera/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.204119 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ca8eb29b-bb26-446f-8a22-5da13ff9d5fa/openstackclient/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.287503 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tbt28_c03aad2b-8ca1-4310-8c11-3287fafcd66f/openstack-network-exporter/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.510825 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server-init/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.696668 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server-init/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.722719 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovsdb-server/0.log" Feb 19 16:54:18 crc kubenswrapper[4810]: I0219 16:54:18.917122 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s5488_4a4fa57b-aa00-4866-b31e-df29f7f86480/ovn-controller/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.163728 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5gjxx_4defb710-c07f-4e63-9baf-45f51085abdc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.176946 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5t6ds_542da555-4f39-4dff-b378-5306135244db/ovs-vswitchd/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.402378 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22facf67-088b-410b-986a-c9e09b3d8feb/openstack-network-exporter/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.402885 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22facf67-088b-410b-986a-c9e09b3d8feb/ovn-northd/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.577986 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f36ad344-e946-4221-892d-3ffe8fbdd59b/nova-metadata-metadata/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.590095 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bdffb5e6-13bb-4c08-ad3c-52d8ded85431/openstack-network-exporter/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.608074 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bdffb5e6-13bb-4c08-ad3c-52d8ded85431/ovsdbserver-nb/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 
16:54:19.881842 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b985124-01b7-430c-b5ea-b9fd095e5f5e/openstack-network-exporter/0.log" Feb 19 16:54:19 crc kubenswrapper[4810]: I0219 16:54:19.889553 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b985124-01b7-430c-b5ea-b9fd095e5f5e/ovsdbserver-sb/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.153582 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7555d68ddd-xqj8c_565eac29-daec-4b40-bcb7-751696560c3a/placement-api/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.168427 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/init-config-reloader/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.340999 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/init-config-reloader/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.356167 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7555d68ddd-xqj8c_565eac29-daec-4b40-bcb7-751696560c3a/placement-log/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.364936 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/config-reloader/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.389131 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/prometheus/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.588515 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/setup-container/0.log" Feb 19 16:54:20 crc 
kubenswrapper[4810]: I0219 16:54:20.616128 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_bf65af35-1e80-49a0-ada2-3bd027193193/thanos-sidecar/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.772704 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/setup-container/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.891242 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/setup-container/0.log" Feb 19 16:54:20 crc kubenswrapper[4810]: I0219 16:54:20.936683 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_03247cdb-4055-4d47-b433-848e363768ab/rabbitmq/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.052207 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/setup-container/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.061196 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b86448c3-669a-4132-b8ab-4db06347fa10/rabbitmq/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.176156 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ps669_69d67433-38d6-4368-a621-254a97b0c619/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.382758 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-x748x_32dc9563-791b-421e-a807-41cc1e775b3a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.475672 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-484qb_8c05e8c7-82f6-4ef1-a576-3c84e70dc570/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.618220 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rf56v_e77512a1-b460-4008-9e59-5b38f3e9f925/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.663762 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gw579_e3132ed5-687d-4cd1-a539-35c4766a27c1/ssh-known-hosts-edpm-deployment/0.log" Feb 19 16:54:21 crc kubenswrapper[4810]: I0219 16:54:21.906954 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78bc5d479f-k79xx_9190a865-226b-487c-b0f9-2573f50f0eab/proxy-server/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.055525 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78bc5d479f-k79xx_9190a865-226b-487c-b0f9-2573f50f0eab/proxy-httpd/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.105285 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hrdll_6c36f3e5-f790-4eda-9486-174f8624dad1/swift-ring-rebalance/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.328467 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-auditor/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.468592 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-reaper/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.486954 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-replicator/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.497389 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-auditor/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.588314 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/account-server/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.683552 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-server/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.687084 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-replicator/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.690993 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/container-updater/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.834105 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-auditor/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.909092 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-expirer/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.933700 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-replicator/0.log" Feb 19 16:54:22 crc kubenswrapper[4810]: I0219 16:54:22.964055 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-server/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.049364 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/object-updater/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.123031 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/swift-recon-cron/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.143584 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_37e2af25-5b30-4fb9-801e-f4a84d665540/rsync/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.299393 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bfhfc_f7ca8c9a-db61-400f-9319-21590462f929/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.351268 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a4c017a9-c049-4baa-acc0-e08a25437c90/tempest-tests-tempest-tests-runner/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.487160 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d3094cb4-a81a-4f4f-b1d8-040c32bd3b2c/test-operator-logs-container/0.log" Feb 19 16:54:23 crc kubenswrapper[4810]: I0219 16:54:23.697726 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zbkm7_412dc62a-d25e-4820-947b-582e310ddff1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 16:54:24 crc kubenswrapper[4810]: I0219 16:54:24.422635 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-applier-0_79f3ef20-3f3d-4fa2-8888-36d421303dfd/watcher-applier/0.log" Feb 19 16:54:24 crc kubenswrapper[4810]: I0219 16:54:24.971815 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c11a7f60-4839-44aa-8615-98de657221f4/watcher-api-log/0.log" Feb 19 16:54:27 crc kubenswrapper[4810]: I0219 16:54:27.644402 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_fbbd48c8-49fb-4e51-9ba7-f7b37f681b3d/watcher-decision-engine/0.log" Feb 19 16:54:28 crc kubenswrapper[4810]: I0219 16:54:28.965597 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c11a7f60-4839-44aa-8615-98de657221f4/watcher-api/0.log" Feb 19 16:54:31 crc kubenswrapper[4810]: I0219 16:54:31.080660 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eb773d46-7b9f-4ca4-b33c-9b800b9eafd7/memcached/0.log" Feb 19 16:54:51 crc kubenswrapper[4810]: I0219 16:54:51.638692 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.137401 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.140255 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.143103 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" 
Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.286207 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/util/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.325039 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/pull/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.361978 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ae2a058ea32e5d02c736557a2a81dfccb5f5284d9451ce18a33356457cfrzl4_124e176a-b011-4a5c-8e7c-ca027d881aea/extract/0.log" Feb 19 16:54:53 crc kubenswrapper[4810]: I0219 16:54:53.769121 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-z5fb9_52bb990c-eff0-4673-be27-d55d433bef0d/manager/0.log" Feb 19 16:54:54 crc kubenswrapper[4810]: I0219 16:54:54.119713 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qz68t_2106e7b5-bb83-464a-a43f-943f22b55078/manager/0.log" Feb 19 16:54:54 crc kubenswrapper[4810]: I0219 16:54:54.312183 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-ffm66_e2942952-ce19-4053-91da-05623c954167/manager/0.log" Feb 19 16:54:54 crc kubenswrapper[4810]: I0219 16:54:54.520031 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-gnmlp_f0ab3643-d267-4902-af1f-cbcbdd7e5e41/manager/0.log" Feb 19 16:54:54 crc kubenswrapper[4810]: I0219 16:54:54.956868 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-nnps5_602535d1-0abe-471e-8409-31319af7bd4b/manager/0.log" Feb 19 16:54:55 crc kubenswrapper[4810]: I0219 16:54:55.381038 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-2kkhl_4898d4eb-d474-44bc-9a38-e36f300d132f/manager/0.log" Feb 19 16:54:55 crc kubenswrapper[4810]: I0219 16:54:55.527604 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-mkfsc_2126b31b-0444-43e4-a250-837f37d476aa/manager/0.log" Feb 19 16:54:55 crc kubenswrapper[4810]: I0219 16:54:55.687969 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-vc7cw_942f40af-0498-4865-99da-bdcd068ef449/manager/0.log" Feb 19 16:54:55 crc kubenswrapper[4810]: I0219 16:54:55.983993 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-6fqmd_e4a54646-39cf-4e42-9367-487ea4f7d8a4/manager/0.log" Feb 19 16:54:56 crc kubenswrapper[4810]: I0219 16:54:56.135946 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jxmt5_1217b757-0f1c-4c4e-9abe-55875992915d/manager/0.log" Feb 19 16:54:56 crc kubenswrapper[4810]: I0219 16:54:56.222378 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-jjqv2_4aeb7cba-c1db-4dd6-92f7-dae7bd2e3f65/manager/0.log" Feb 19 16:54:56 crc kubenswrapper[4810]: I0219 16:54:56.472496 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l67cq_fef4ad67-ccc8-4b32-bfb8-38dd6aa8e07e/manager/0.log" Feb 19 16:54:56 crc kubenswrapper[4810]: I0219 16:54:56.690081 4810 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c9hlpt_c677bdd0-7248-4b02-9ab4-035c034a976a/manager/0.log" Feb 19 16:54:57 crc kubenswrapper[4810]: I0219 16:54:57.073660 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-69cffcd4f6-27gzn_e84ef702-2f13-42e9-ae2b-6f1465b67ff3/operator/0.log" Feb 19 16:54:57 crc kubenswrapper[4810]: I0219 16:54:57.218675 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gkft8_09f49ae7-b6fb-4ca5-9238-8bcf8d15ea95/registry-server/0.log" Feb 19 16:54:57 crc kubenswrapper[4810]: I0219 16:54:57.501874 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-5xnwd_e163eac0-ea1f-4002-9469-844240d7a44c/manager/0.log" Feb 19 16:54:57 crc kubenswrapper[4810]: I0219 16:54:57.717845 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-7tzvr_9f5779a5-4cda-40dc-831d-950f97eae317/manager/0.log" Feb 19 16:54:57 crc kubenswrapper[4810]: I0219 16:54:57.909613 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-k98c8_64ed590e-59b6-44c8-baee-324162d099b8/operator/0.log" Feb 19 16:54:58 crc kubenswrapper[4810]: I0219 16:54:58.126896 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-t44nb_aa5063d7-2358-4149-a3b9-ef2ce138faf4/manager/0.log" Feb 19 16:54:58 crc kubenswrapper[4810]: I0219 16:54:58.500825 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pw9kt_69b7e96d-bce6-4653-998e-3bf5d159ae5a/manager/0.log" Feb 19 16:54:58 crc kubenswrapper[4810]: I0219 16:54:58.631608 4810 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-px9zx_eaed166e-39b5-45ca-8a65-a22710d5fe37/manager/0.log" Feb 19 16:54:58 crc kubenswrapper[4810]: I0219 16:54:58.979478 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-798847869b-dlmvg_9c5af548-c722-4e6b-9309-1420838257e0/manager/0.log" Feb 19 16:54:59 crc kubenswrapper[4810]: I0219 16:54:59.061971 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d464797d7-lrlqc_a6f83f3c-26f4-472f-9fcd-ae8049f1819a/manager/0.log" Feb 19 16:54:59 crc kubenswrapper[4810]: I0219 16:54:59.348447 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-vcbwg_3cff7e0c-86c7-4029-aefe-a7f7e8e2d76d/manager/0.log" Feb 19 16:55:05 crc kubenswrapper[4810]: I0219 16:55:05.088557 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-mzslt_91002269-9fe0-44d2-9dbd-9e4cf58274bf/manager/0.log" Feb 19 16:55:19 crc kubenswrapper[4810]: I0219 16:55:19.765369 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-895xv_64dc0d58-11d4-456b-97ab-a4d3ec28225b/control-plane-machine-set-operator/0.log" Feb 19 16:55:19 crc kubenswrapper[4810]: I0219 16:55:19.982743 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l66cb_9a7776ca-1a56-4eca-9e44-ba1b7b15510f/kube-rbac-proxy/0.log" Feb 19 16:55:20 crc kubenswrapper[4810]: I0219 16:55:20.006147 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l66cb_9a7776ca-1a56-4eca-9e44-ba1b7b15510f/machine-api-operator/0.log" Feb 19 16:55:35 crc kubenswrapper[4810]: 
I0219 16:55:35.625421 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fwh4x_02206e32-6f49-407e-a02b-ce61e3daabf6/cert-manager-controller/0.log" Feb 19 16:55:35 crc kubenswrapper[4810]: I0219 16:55:35.796394 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-x4csq_54e755f0-9c2f-4d47-9979-b7b92996bab6/cert-manager-cainjector/0.log" Feb 19 16:55:35 crc kubenswrapper[4810]: I0219 16:55:35.841322 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7lspv_1224bbe4-6d8e-410e-8990-3813efdd2003/cert-manager-webhook/0.log" Feb 19 16:55:49 crc kubenswrapper[4810]: I0219 16:55:49.537449 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:55:49 crc kubenswrapper[4810]: I0219 16:55:49.537939 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:55:50 crc kubenswrapper[4810]: I0219 16:55:50.988229 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-kdwwx_35fc682a-0cc9-4922-a2f2-60da1ddb1eb9/nmstate-console-plugin/0.log" Feb 19 16:55:51 crc kubenswrapper[4810]: I0219 16:55:51.150483 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bhhvv_c0eb0835-6df5-4a21-b309-f178a032d027/nmstate-handler/0.log" Feb 19 16:55:51 crc kubenswrapper[4810]: I0219 16:55:51.209205 4810 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4g952_ce589619-7c2f-43db-ae4f-fb43be7b07f4/kube-rbac-proxy/0.log" Feb 19 16:55:51 crc kubenswrapper[4810]: I0219 16:55:51.290585 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4g952_ce589619-7c2f-43db-ae4f-fb43be7b07f4/nmstate-metrics/0.log" Feb 19 16:55:51 crc kubenswrapper[4810]: I0219 16:55:51.451123 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-p4hwg_f8300a06-7526-4da5-89a6-7fff8ff284c9/nmstate-operator/0.log" Feb 19 16:55:51 crc kubenswrapper[4810]: I0219 16:55:51.539259 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-ckvvq_db05e782-a3d7-4cbe-be3f-f6226d894864/nmstate-webhook/0.log" Feb 19 16:56:07 crc kubenswrapper[4810]: I0219 16:56:07.847551 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jkkz_1656f52d-7771-4bbb-9642-b296d16b791e/prometheus-operator/0.log" Feb 19 16:56:08 crc kubenswrapper[4810]: I0219 16:56:08.032814 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt_d5debcf2-9629-4bb2-9133-f4b81748ff7d/prometheus-operator-admission-webhook/0.log" Feb 19 16:56:08 crc kubenswrapper[4810]: I0219 16:56:08.203343 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr_408628c0-0b2c-48f9-b849-ee1b124499e1/prometheus-operator-admission-webhook/0.log" Feb 19 16:56:08 crc kubenswrapper[4810]: I0219 16:56:08.233348 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dk9c4_8bdf030e-92d8-45dc-ab6c-a7b241444677/operator/0.log" Feb 19 16:56:08 crc kubenswrapper[4810]: I0219 
16:56:08.408635 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2fdxm_c5968625-c396-4ae0-9846-c2ceb6baf655/perses-operator/0.log" Feb 19 16:56:19 crc kubenswrapper[4810]: I0219 16:56:19.538038 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:56:19 crc kubenswrapper[4810]: I0219 16:56:19.538534 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.193068 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jngcz_781d467e-8522-43a3-a552-1ceebc40cddd/kube-rbac-proxy/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.308253 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jngcz_781d467e-8522-43a3-a552-1ceebc40cddd/controller/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.413474 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.641730 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.642671 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.659993 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.684912 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.796179 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.833432 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.859965 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:56:24 crc kubenswrapper[4810]: I0219 16:56:24.861733 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.155088 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-reloader/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.155198 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-frr-files/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.160639 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/cp-metrics/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.203038 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/controller/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.352908 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/kube-rbac-proxy/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.362684 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/frr-metrics/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.458481 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/kube-rbac-proxy-frr/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.860114 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/reloader/0.log" Feb 19 16:56:25 crc kubenswrapper[4810]: I0219 16:56:25.915274 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-cwj24_1ee9f8f3-05a8-4648-b48d-4975285346d7/frr-k8s-webhook-server/0.log" Feb 19 16:56:26 crc kubenswrapper[4810]: I0219 16:56:26.118608 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75f48c59dc-m5vm8_f26047c7-b8cc-4ce2-8a48-4b380ab225c0/manager/0.log" Feb 19 16:56:26 crc kubenswrapper[4810]: I0219 16:56:26.166037 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-595d5f7545-vfb4c_3d62866f-b047-419d-8eb0-848b0df84e63/webhook-server/0.log" Feb 19 16:56:26 crc kubenswrapper[4810]: I0219 16:56:26.317942 4810 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hllgd_c9d97974-67d2-42e5-89fe-b6db106a47c4/kube-rbac-proxy/0.log" Feb 19 16:56:26 crc kubenswrapper[4810]: I0219 16:56:26.920589 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hllgd_c9d97974-67d2-42e5-89fe-b6db106a47c4/speaker/0.log" Feb 19 16:56:27 crc kubenswrapper[4810]: I0219 16:56:27.127203 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7rbxk_66c7e596-ffa3-4687-8c80-21acecbd8075/frr/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.523524 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.658926 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.693460 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.783222 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.945239 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/pull/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.949069 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/extract/0.log" Feb 19 16:56:41 crc kubenswrapper[4810]: I0219 16:56:41.966142 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgm4z_e54510d9-7d24-47bb-a55e-b50e7cff9fba/util/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.124179 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.315747 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.360333 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.375409 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.542080 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/util/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.546852 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/extract/0.log" Feb 
19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.572346 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccnrb_a861f8a3-be34-4fc0-96cb-42502d0a3bab/pull/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.695037 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.878220 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.890688 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:56:42 crc kubenswrapper[4810]: I0219 16:56:42.942278 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.048206 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-utilities/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.091408 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/extract-content/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.288097 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5n9gc_6dde1ea5-68be-4851-8816-3c7302dc2579/registry-server/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.317444 4810 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-utilities/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.460077 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-content/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.481429 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-utilities/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.492873 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-content/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.707706 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-utilities/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.771018 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/extract-content/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.889689 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wtwq9_7f1a10dd-dddb-4bcc-ad11-42e2ab29a0f1/registry-server/0.log" Feb 19 16:56:43 crc kubenswrapper[4810]: I0219 16:56:43.923348 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.079471 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.111123 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.111144 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.266180 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/pull/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.291507 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/util/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.314692 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawk578_815191f4-9d3a-4003-a32f-de4f76c9c15f/extract/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.446423 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sm9wk_41d27e40-a89e-4fd6-8106-824c5a257f25/marketplace-operator/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.476216 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: 
I0219 16:56:44.700705 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.704789 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.705161 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.859149 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-content/0.log" Feb 19 16:56:44 crc kubenswrapper[4810]: I0219 16:56:44.861486 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/extract-utilities/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.071242 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.071923 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tgtg8_b3c2bc60-712d-4ef6-b461-ad683f51f2e4/registry-server/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.237005 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.265993 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.289678 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.533350 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-utilities/0.log" Feb 19 16:56:45 crc kubenswrapper[4810]: I0219 16:56:45.534451 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/extract-content/0.log" Feb 19 16:56:46 crc kubenswrapper[4810]: I0219 16:56:46.048723 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bwbpv_05b0324a-36c1-419b-8bdd-e41ad42a6a3f/registry-server/0.log" Feb 19 16:56:49 crc kubenswrapper[4810]: I0219 16:56:49.537319 4810 patch_prober.go:28] interesting pod/machine-config-daemon-t499d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 16:56:49 crc kubenswrapper[4810]: I0219 16:56:49.537809 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 16:56:49 crc kubenswrapper[4810]: I0219 16:56:49.537881 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-t499d" Feb 19 16:56:49 crc kubenswrapper[4810]: I0219 16:56:49.539022 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"} pod="openshift-machine-config-operator/machine-config-daemon-t499d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 16:56:49 crc kubenswrapper[4810]: I0219 16:56:49.539126 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerName="machine-config-daemon" containerID="cri-o://a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" gracePeriod=600 Feb 19 16:56:49 crc kubenswrapper[4810]: E0219 16:56:49.687986 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:56:50 crc kubenswrapper[4810]: I0219 16:56:50.171259 4810 generic.go:334] "Generic (PLEG): container finished" podID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" exitCode=0 Feb 19 16:56:50 crc kubenswrapper[4810]: I0219 16:56:50.171314 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerDied","Data":"a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"} Feb 19 16:56:50 crc 
kubenswrapper[4810]: I0219 16:56:50.171458 4810 scope.go:117] "RemoveContainer" containerID="30b913c55f9740186a8530347003d7b1c641faf95ecc5adefecfaffe54fb5ed2" Feb 19 16:56:50 crc kubenswrapper[4810]: I0219 16:56:50.172963 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:56:50 crc kubenswrapper[4810]: E0219 16:56:50.173723 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:56:59 crc kubenswrapper[4810]: I0219 16:56:59.375938 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-qpvdt_d5debcf2-9629-4bb2-9133-f4b81748ff7d/prometheus-operator-admission-webhook/0.log" Feb 19 16:56:59 crc kubenswrapper[4810]: I0219 16:56:59.449095 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jkkz_1656f52d-7771-4bbb-9642-b296d16b791e/prometheus-operator/0.log" Feb 19 16:56:59 crc kubenswrapper[4810]: I0219 16:56:59.506421 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d6bf9b6-vrfvr_408628c0-0b2c-48f9-b849-ee1b124499e1/prometheus-operator-admission-webhook/0.log" Feb 19 16:56:59 crc kubenswrapper[4810]: I0219 16:56:59.624556 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dk9c4_8bdf030e-92d8-45dc-ab6c-a7b241444677/operator/0.log" Feb 19 16:56:59 crc kubenswrapper[4810]: I0219 16:56:59.657446 4810 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2fdxm_c5968625-c396-4ae0-9846-c2ceb6baf655/perses-operator/0.log" Feb 19 16:57:03 crc kubenswrapper[4810]: I0219 16:57:03.440571 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:57:03 crc kubenswrapper[4810]: E0219 16:57:03.441127 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:57:17 crc kubenswrapper[4810]: I0219 16:57:17.439866 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:57:17 crc kubenswrapper[4810]: E0219 16:57:17.440639 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:57:32 crc kubenswrapper[4810]: I0219 16:57:32.440118 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:57:32 crc kubenswrapper[4810]: E0219 16:57:32.441219 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:57:45 crc kubenswrapper[4810]: I0219 16:57:45.441609 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:57:45 crc kubenswrapper[4810]: E0219 16:57:45.442708 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:57:59 crc kubenswrapper[4810]: I0219 16:57:59.439847 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:57:59 crc kubenswrapper[4810]: E0219 16:57:59.440734 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:58:13 crc kubenswrapper[4810]: I0219 16:58:13.440241 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:58:13 crc kubenswrapper[4810]: E0219 16:58:13.441553 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:58:26 crc kubenswrapper[4810]: I0219 16:58:26.439652 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:58:26 crc kubenswrapper[4810]: E0219 16:58:26.440952 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:58:38 crc kubenswrapper[4810]: I0219 16:58:38.440566 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:58:38 crc kubenswrapper[4810]: E0219 16:58:38.441402 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:58:45 crc kubenswrapper[4810]: I0219 16:58:45.645575 4810 scope.go:117] "RemoveContainer" containerID="1827e095c322c4555855b6ea50a05730da42c19ada929bb9656b95a872f9917c" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.602075 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dbdz2"] Feb 19 16:58:48 crc kubenswrapper[4810]: E0219 
16:58:48.603013 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aae9737-d017-4a11-8323-cd0354ba09aa" containerName="container-00" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.603034 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aae9737-d017-4a11-8323-cd0354ba09aa" containerName="container-00" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.603284 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aae9737-d017-4a11-8323-cd0354ba09aa" containerName="container-00" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.605271 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.638006 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbdz2"] Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.785005 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-catalog-content\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.785287 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-utilities\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.785400 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jsfx\" (UniqueName: 
\"kubernetes.io/projected/b4333000-6b13-4953-b6d8-94b72a034fa3-kube-api-access-9jsfx\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.887063 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jsfx\" (UniqueName: \"kubernetes.io/projected/b4333000-6b13-4953-b6d8-94b72a034fa3-kube-api-access-9jsfx\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.887247 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-catalog-content\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.887299 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-utilities\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.888021 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-utilities\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.888789 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b4333000-6b13-4953-b6d8-94b72a034fa3-catalog-content\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.925280 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jsfx\" (UniqueName: \"kubernetes.io/projected/b4333000-6b13-4953-b6d8-94b72a034fa3-kube-api-access-9jsfx\") pod \"certified-operators-dbdz2\" (UID: \"b4333000-6b13-4953-b6d8-94b72a034fa3\") " pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:48 crc kubenswrapper[4810]: I0219 16:58:48.947121 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:49 crc kubenswrapper[4810]: I0219 16:58:49.691081 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbdz2"] Feb 19 16:58:50 crc kubenswrapper[4810]: I0219 16:58:50.390376 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4333000-6b13-4953-b6d8-94b72a034fa3" containerID="d15cf9afbb1bcf445a79cbff5a4a42a15c43d7239e3785870c3d7d15fdb487c7" exitCode=0 Feb 19 16:58:50 crc kubenswrapper[4810]: I0219 16:58:50.390948 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbdz2" event={"ID":"b4333000-6b13-4953-b6d8-94b72a034fa3","Type":"ContainerDied","Data":"d15cf9afbb1bcf445a79cbff5a4a42a15c43d7239e3785870c3d7d15fdb487c7"} Feb 19 16:58:50 crc kubenswrapper[4810]: I0219 16:58:50.390981 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbdz2" event={"ID":"b4333000-6b13-4953-b6d8-94b72a034fa3","Type":"ContainerStarted","Data":"26f06d09242947bbcb3eda6d9dd4c0cb9850bbf8a6d4e737adaca0b70117f512"} Feb 19 16:58:50 crc kubenswrapper[4810]: I0219 16:58:50.396238 4810 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 19 16:58:51 crc kubenswrapper[4810]: I0219 16:58:51.448965 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:58:51 crc kubenswrapper[4810]: E0219 16:58:51.449244 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:58:55 crc kubenswrapper[4810]: I0219 16:58:55.450882 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4333000-6b13-4953-b6d8-94b72a034fa3" containerID="e0323915d761d2f378bea3e13516dd792761c267e30344e8801a5e56c377084f" exitCode=0 Feb 19 16:58:55 crc kubenswrapper[4810]: I0219 16:58:55.479693 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbdz2" event={"ID":"b4333000-6b13-4953-b6d8-94b72a034fa3","Type":"ContainerDied","Data":"e0323915d761d2f378bea3e13516dd792761c267e30344e8801a5e56c377084f"} Feb 19 16:58:57 crc kubenswrapper[4810]: I0219 16:58:57.498756 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dbdz2" event={"ID":"b4333000-6b13-4953-b6d8-94b72a034fa3","Type":"ContainerStarted","Data":"43ae22d790c0f3771637fac0124f2bb32d5470c25cb7395e96bb10929efd5241"} Feb 19 16:58:57 crc kubenswrapper[4810]: I0219 16:58:57.532428 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dbdz2" podStartSLOduration=3.950747642 podStartE2EDuration="9.532408155s" podCreationTimestamp="2026-02-19 16:58:48 +0000 UTC" firstStartedPulling="2026-02-19 16:58:50.395871408 +0000 
UTC m=+6559.877901542" lastFinishedPulling="2026-02-19 16:58:55.977531941 +0000 UTC m=+6565.459562055" observedRunningTime="2026-02-19 16:58:57.524711636 +0000 UTC m=+6567.006741770" watchObservedRunningTime="2026-02-19 16:58:57.532408155 +0000 UTC m=+6567.014438299" Feb 19 16:58:58 crc kubenswrapper[4810]: I0219 16:58:58.947627 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:58 crc kubenswrapper[4810]: I0219 16:58:58.947956 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:58:59 crc kubenswrapper[4810]: I0219 16:58:59.023239 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:59:05 crc kubenswrapper[4810]: I0219 16:59:05.602370 4810 generic.go:334] "Generic (PLEG): container finished" podID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerID="07153056d19231ca3d7ab4d1dd2cd03f17f8e8578ea20e8a31486c974d0f0b3d" exitCode=0 Feb 19 16:59:05 crc kubenswrapper[4810]: I0219 16:59:05.602406 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-944q7/must-gather-452lm" event={"ID":"0adbd447-568e-48c8-ab76-3d2f20e3f315","Type":"ContainerDied","Data":"07153056d19231ca3d7ab4d1dd2cd03f17f8e8578ea20e8a31486c974d0f0b3d"} Feb 19 16:59:05 crc kubenswrapper[4810]: I0219 16:59:05.603953 4810 scope.go:117] "RemoveContainer" containerID="07153056d19231ca3d7ab4d1dd2cd03f17f8e8578ea20e8a31486c974d0f0b3d" Feb 19 16:59:06 crc kubenswrapper[4810]: I0219 16:59:06.439646 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:59:06 crc kubenswrapper[4810]: E0219 16:59:06.440095 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:59:06 crc kubenswrapper[4810]: I0219 16:59:06.646851 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-944q7_must-gather-452lm_0adbd447-568e-48c8-ab76-3d2f20e3f315/gather/0.log" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.015044 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dbdz2" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.093446 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dbdz2"] Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.149232 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.149474 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5n9gc" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="registry-server" containerID="cri-o://c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444" gracePeriod=2 Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.609911 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.651844 4810 generic.go:334] "Generic (PLEG): container finished" podID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerID="c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444" exitCode=0 Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.653083 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5n9gc" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.653857 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerDied","Data":"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444"} Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.653997 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5n9gc" event={"ID":"6dde1ea5-68be-4851-8816-3c7302dc2579","Type":"ContainerDied","Data":"280973c57a3f46eeb314d42cd9ad11d9e2b63939b685ed7e15c043b29db4262c"} Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.654104 4810 scope.go:117] "RemoveContainer" containerID="c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.687099 4810 scope.go:117] "RemoveContainer" containerID="8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.757560 4810 scope.go:117] "RemoveContainer" containerID="c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.790933 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content\") pod \"6dde1ea5-68be-4851-8816-3c7302dc2579\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.791121 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities\") pod \"6dde1ea5-68be-4851-8816-3c7302dc2579\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " Feb 19 16:59:09 crc kubenswrapper[4810]: 
I0219 16:59:09.791185 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8ln\" (UniqueName: \"kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln\") pod \"6dde1ea5-68be-4851-8816-3c7302dc2579\" (UID: \"6dde1ea5-68be-4851-8816-3c7302dc2579\") " Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.800503 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities" (OuterVolumeSpecName: "utilities") pod "6dde1ea5-68be-4851-8816-3c7302dc2579" (UID: "6dde1ea5-68be-4851-8816-3c7302dc2579"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.801606 4810 scope.go:117] "RemoveContainer" containerID="c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444" Feb 19 16:59:09 crc kubenswrapper[4810]: E0219 16:59:09.805560 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444\": container with ID starting with c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444 not found: ID does not exist" containerID="c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.805603 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444"} err="failed to get container status \"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444\": rpc error: code = NotFound desc = could not find container \"c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444\": container with ID starting with c46982b47a689551c84c8a94559da6c4aba508584e390cc906dbadf87e5b2444 not found: ID does not exist" Feb 19 
16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.805628 4810 scope.go:117] "RemoveContainer" containerID="8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e" Feb 19 16:59:09 crc kubenswrapper[4810]: E0219 16:59:09.806814 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e\": container with ID starting with 8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e not found: ID does not exist" containerID="8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.806955 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e"} err="failed to get container status \"8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e\": rpc error: code = NotFound desc = could not find container \"8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e\": container with ID starting with 8046aa230ca2b4fe6d7939a4777943e6f17a788dfab6e0b652084234b84eaf5e not found: ID does not exist" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.807076 4810 scope.go:117] "RemoveContainer" containerID="c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.810571 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln" (OuterVolumeSpecName: "kube-api-access-5j8ln") pod "6dde1ea5-68be-4851-8816-3c7302dc2579" (UID: "6dde1ea5-68be-4851-8816-3c7302dc2579"). InnerVolumeSpecName "kube-api-access-5j8ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:59:09 crc kubenswrapper[4810]: E0219 16:59:09.811506 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033\": container with ID starting with c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033 not found: ID does not exist" containerID="c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.811549 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033"} err="failed to get container status \"c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033\": rpc error: code = NotFound desc = could not find container \"c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033\": container with ID starting with c60a4f5da78e4d16d81eacb1c4cfea0a5df0810a9ff302334c88f13711140033 not found: ID does not exist" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.842441 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dde1ea5-68be-4851-8816-3c7302dc2579" (UID: "6dde1ea5-68be-4851-8816-3c7302dc2579"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.894384 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.894414 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dde1ea5-68be-4851-8816-3c7302dc2579-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.894428 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8ln\" (UniqueName: \"kubernetes.io/projected/6dde1ea5-68be-4851-8816-3c7302dc2579-kube-api-access-5j8ln\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:09 crc kubenswrapper[4810]: I0219 16:59:09.997680 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:59:10 crc kubenswrapper[4810]: I0219 16:59:10.007478 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5n9gc"] Feb 19 16:59:11 crc kubenswrapper[4810]: I0219 16:59:11.451089 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" path="/var/lib/kubelet/pods/6dde1ea5-68be-4851-8816-3c7302dc2579/volumes" Feb 19 16:59:17 crc kubenswrapper[4810]: I0219 16:59:17.440468 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:59:17 crc kubenswrapper[4810]: E0219 16:59:17.441963 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.511612 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-944q7/must-gather-452lm"] Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.511968 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-944q7/must-gather-452lm" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="copy" containerID="cri-o://35e06e2cca1fb991fa940df76a9f88f0b3d758223a060db89405e9f9e28e0bdb" gracePeriod=2 Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.521497 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-944q7/must-gather-452lm"] Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.992831 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-944q7_must-gather-452lm_0adbd447-568e-48c8-ab76-3d2f20e3f315/copy/0.log" Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.993542 4810 generic.go:334] "Generic (PLEG): container finished" podID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerID="35e06e2cca1fb991fa940df76a9f88f0b3d758223a060db89405e9f9e28e0bdb" exitCode=143 Feb 19 16:59:20 crc kubenswrapper[4810]: I0219 16:59:20.993602 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf36fcfbf554bf4a4c84825c8d6c3208469e142f948d23106d4b61d616d08799" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.046312 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-944q7_must-gather-452lm_0adbd447-568e-48c8-ab76-3d2f20e3f315/copy/0.log" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.046930 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.162678 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output\") pod \"0adbd447-568e-48c8-ab76-3d2f20e3f315\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.162753 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rmt\" (UniqueName: \"kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt\") pod \"0adbd447-568e-48c8-ab76-3d2f20e3f315\" (UID: \"0adbd447-568e-48c8-ab76-3d2f20e3f315\") " Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.170498 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt" (OuterVolumeSpecName: "kube-api-access-f6rmt") pod "0adbd447-568e-48c8-ab76-3d2f20e3f315" (UID: "0adbd447-568e-48c8-ab76-3d2f20e3f315"). InnerVolumeSpecName "kube-api-access-f6rmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.265953 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rmt\" (UniqueName: \"kubernetes.io/projected/0adbd447-568e-48c8-ab76-3d2f20e3f315-kube-api-access-f6rmt\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.370409 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0adbd447-568e-48c8-ab76-3d2f20e3f315" (UID: "0adbd447-568e-48c8-ab76-3d2f20e3f315"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.452978 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" path="/var/lib/kubelet/pods/0adbd447-568e-48c8-ab76-3d2f20e3f315/volumes" Feb 19 16:59:21 crc kubenswrapper[4810]: I0219 16:59:21.469882 4810 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0adbd447-568e-48c8-ab76-3d2f20e3f315-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:22 crc kubenswrapper[4810]: I0219 16:59:22.002879 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-944q7/must-gather-452lm" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.841207 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:28 crc kubenswrapper[4810]: E0219 16:59:28.842052 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="registry-server" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.842069 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="registry-server" Feb 19 16:59:28 crc kubenswrapper[4810]: E0219 16:59:28.842088 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="extract-content" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.842096 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="extract-content" Feb 19 16:59:28 crc kubenswrapper[4810]: E0219 16:59:28.842122 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="extract-utilities" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 
16:59:28.842132 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="extract-utilities" Feb 19 16:59:28 crc kubenswrapper[4810]: E0219 16:59:28.842155 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="copy" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.842163 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="copy" Feb 19 16:59:28 crc kubenswrapper[4810]: E0219 16:59:28.842181 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="gather" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.842188 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="gather" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.843713 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dde1ea5-68be-4851-8816-3c7302dc2579" containerName="registry-server" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.843746 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="copy" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.843794 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0adbd447-568e-48c8-ab76-3d2f20e3f315" containerName="gather" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.846543 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.878173 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.936569 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.936663 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:28 crc kubenswrapper[4810]: I0219 16:59:28.936803 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4449\" (UniqueName: \"kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.040181 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.040388 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.040510 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4449\" (UniqueName: \"kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.041038 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.041077 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.050911 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.074110 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.119254 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4449\" (UniqueName: \"kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449\") pod \"redhat-marketplace-l88mg\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.177687 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.184848 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.250571 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.250671 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqvs\" (UniqueName: \"kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.250793 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities\") pod \"community-operators-5mt74\" (UID: 
\"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.352447 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqvs\" (UniqueName: \"kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.352847 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.352902 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.353540 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.353742 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") 
" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.375079 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqvs\" (UniqueName: \"kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs\") pod \"community-operators-5mt74\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.441554 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:59:29 crc kubenswrapper[4810]: E0219 16:59:29.441923 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.497795 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:29 crc kubenswrapper[4810]: I0219 16:59:29.746407 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:29 crc kubenswrapper[4810]: W0219 16:59:29.773472 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb1500ae_62df_4106_92a9_292e4a530f9b.slice/crio-102f4f274c72147c1a6a8c6c1f5977a776f1840f276ae532338949bb1735f657 WatchSource:0}: Error finding container 102f4f274c72147c1a6a8c6c1f5977a776f1840f276ae532338949bb1735f657: Status 404 returned error can't find the container with id 102f4f274c72147c1a6a8c6c1f5977a776f1840f276ae532338949bb1735f657 Feb 19 16:59:30 crc kubenswrapper[4810]: I0219 16:59:30.030449 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:30 crc kubenswrapper[4810]: I0219 16:59:30.160484 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerStarted","Data":"6e87026d9bcbe6b26d8857e5d141664c1b740983c8945dad943f6e6a6bd89c0e"} Feb 19 16:59:30 crc kubenswrapper[4810]: I0219 16:59:30.163039 4810 generic.go:334] "Generic (PLEG): container finished" podID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerID="184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52" exitCode=0 Feb 19 16:59:30 crc kubenswrapper[4810]: I0219 16:59:30.163072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerDied","Data":"184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52"} Feb 19 16:59:30 crc kubenswrapper[4810]: I0219 16:59:30.163091 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerStarted","Data":"102f4f274c72147c1a6a8c6c1f5977a776f1840f276ae532338949bb1735f657"} Feb 19 16:59:31 crc kubenswrapper[4810]: I0219 16:59:31.188304 4810 generic.go:334] "Generic (PLEG): container finished" podID="808052e2-d521-47dd-afa0-09caec857462" containerID="5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c" exitCode=0 Feb 19 16:59:31 crc kubenswrapper[4810]: I0219 16:59:31.188469 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerDied","Data":"5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c"} Feb 19 16:59:32 crc kubenswrapper[4810]: I0219 16:59:32.207779 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerStarted","Data":"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836"} Feb 19 16:59:33 crc kubenswrapper[4810]: I0219 16:59:33.222837 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerStarted","Data":"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745"} Feb 19 16:59:34 crc kubenswrapper[4810]: I0219 16:59:34.233156 4810 generic.go:334] "Generic (PLEG): container finished" podID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerID="087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836" exitCode=0 Feb 19 16:59:34 crc kubenswrapper[4810]: I0219 16:59:34.233222 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" 
event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerDied","Data":"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836"} Feb 19 16:59:35 crc kubenswrapper[4810]: I0219 16:59:35.246929 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerStarted","Data":"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2"} Feb 19 16:59:35 crc kubenswrapper[4810]: I0219 16:59:35.276571 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l88mg" podStartSLOduration=2.6360307499999998 podStartE2EDuration="7.276553779s" podCreationTimestamp="2026-02-19 16:59:28 +0000 UTC" firstStartedPulling="2026-02-19 16:59:30.165141546 +0000 UTC m=+6599.647171670" lastFinishedPulling="2026-02-19 16:59:34.805664545 +0000 UTC m=+6604.287694699" observedRunningTime="2026-02-19 16:59:35.270171962 +0000 UTC m=+6604.752202106" watchObservedRunningTime="2026-02-19 16:59:35.276553779 +0000 UTC m=+6604.758583903" Feb 19 16:59:36 crc kubenswrapper[4810]: I0219 16:59:36.258990 4810 generic.go:334] "Generic (PLEG): container finished" podID="808052e2-d521-47dd-afa0-09caec857462" containerID="00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745" exitCode=0 Feb 19 16:59:36 crc kubenswrapper[4810]: I0219 16:59:36.259066 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerDied","Data":"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745"} Feb 19 16:59:38 crc kubenswrapper[4810]: I0219 16:59:38.297546 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" 
event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerStarted","Data":"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd"} Feb 19 16:59:38 crc kubenswrapper[4810]: I0219 16:59:38.336960 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5mt74" podStartSLOduration=3.550946484 podStartE2EDuration="9.336930263s" podCreationTimestamp="2026-02-19 16:59:29 +0000 UTC" firstStartedPulling="2026-02-19 16:59:31.190590875 +0000 UTC m=+6600.672620999" lastFinishedPulling="2026-02-19 16:59:36.976574614 +0000 UTC m=+6606.458604778" observedRunningTime="2026-02-19 16:59:38.320647172 +0000 UTC m=+6607.802677316" watchObservedRunningTime="2026-02-19 16:59:38.336930263 +0000 UTC m=+6607.818960397" Feb 19 16:59:39 crc kubenswrapper[4810]: I0219 16:59:39.185608 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:39 crc kubenswrapper[4810]: I0219 16:59:39.187292 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:39 crc kubenswrapper[4810]: I0219 16:59:39.255772 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:39 crc kubenswrapper[4810]: I0219 16:59:39.499537 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:39 crc kubenswrapper[4810]: I0219 16:59:39.499588 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:40 crc kubenswrapper[4810]: I0219 16:59:40.549167 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5mt74" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="registry-server" 
probeResult="failure" output=< Feb 19 16:59:40 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 16:59:40 crc kubenswrapper[4810]: > Feb 19 16:59:44 crc kubenswrapper[4810]: I0219 16:59:44.440415 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:59:44 crc kubenswrapper[4810]: E0219 16:59:44.441272 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 16:59:45 crc kubenswrapper[4810]: I0219 16:59:45.755299 4810 scope.go:117] "RemoveContainer" containerID="35e06e2cca1fb991fa940df76a9f88f0b3d758223a060db89405e9f9e28e0bdb" Feb 19 16:59:45 crc kubenswrapper[4810]: I0219 16:59:45.787518 4810 scope.go:117] "RemoveContainer" containerID="07153056d19231ca3d7ab4d1dd2cd03f17f8e8578ea20e8a31486c974d0f0b3d" Feb 19 16:59:46 crc kubenswrapper[4810]: I0219 16:59:46.177583 4810 scope.go:117] "RemoveContainer" containerID="c35e5c9c4b5ddac624677f837531b6c053c31b4d14fb41f66bcb860eccf31d9d" Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.257586 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.327102 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.433189 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l88mg" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" 
containerName="registry-server" containerID="cri-o://3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2" gracePeriod=2 Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.555126 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.623386 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:49 crc kubenswrapper[4810]: I0219 16:59:49.954942 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.052422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4449\" (UniqueName: \"kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449\") pod \"fb1500ae-62df-4106-92a9-292e4a530f9b\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.052682 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities\") pod \"fb1500ae-62df-4106-92a9-292e4a530f9b\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.052965 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content\") pod \"fb1500ae-62df-4106-92a9-292e4a530f9b\" (UID: \"fb1500ae-62df-4106-92a9-292e4a530f9b\") " Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.057173 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities" 
(OuterVolumeSpecName: "utilities") pod "fb1500ae-62df-4106-92a9-292e4a530f9b" (UID: "fb1500ae-62df-4106-92a9-292e4a530f9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.063868 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449" (OuterVolumeSpecName: "kube-api-access-d4449") pod "fb1500ae-62df-4106-92a9-292e4a530f9b" (UID: "fb1500ae-62df-4106-92a9-292e4a530f9b"). InnerVolumeSpecName "kube-api-access-d4449". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.078277 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb1500ae-62df-4106-92a9-292e4a530f9b" (UID: "fb1500ae-62df-4106-92a9-292e4a530f9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.155737 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4449\" (UniqueName: \"kubernetes.io/projected/fb1500ae-62df-4106-92a9-292e4a530f9b-kube-api-access-d4449\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.155985 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.156050 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1500ae-62df-4106-92a9-292e4a530f9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.452928 4810 generic.go:334] "Generic (PLEG): container finished" podID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerID="3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2" exitCode=0 Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.454557 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerDied","Data":"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2"} Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.454618 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l88mg" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.454667 4810 scope.go:117] "RemoveContainer" containerID="3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.454648 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l88mg" event={"ID":"fb1500ae-62df-4106-92a9-292e4a530f9b","Type":"ContainerDied","Data":"102f4f274c72147c1a6a8c6c1f5977a776f1840f276ae532338949bb1735f657"} Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.500199 4810 scope.go:117] "RemoveContainer" containerID="087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.511814 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.530858 4810 scope.go:117] "RemoveContainer" containerID="184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.534687 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l88mg"] Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.578209 4810 scope.go:117] "RemoveContainer" containerID="3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2" Feb 19 16:59:50 crc kubenswrapper[4810]: E0219 16:59:50.579068 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2\": container with ID starting with 3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2 not found: ID does not exist" containerID="3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.579107 4810 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2"} err="failed to get container status \"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2\": rpc error: code = NotFound desc = could not find container \"3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2\": container with ID starting with 3e1cb4584544667e6e0ebcdf16d948da751d43efcbc033f5ccb4b9f4f67cb6a2 not found: ID does not exist" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.579138 4810 scope.go:117] "RemoveContainer" containerID="087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836" Feb 19 16:59:50 crc kubenswrapper[4810]: E0219 16:59:50.580078 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836\": container with ID starting with 087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836 not found: ID does not exist" containerID="087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.580113 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836"} err="failed to get container status \"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836\": rpc error: code = NotFound desc = could not find container \"087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836\": container with ID starting with 087c65650316a2f00c7c8a503624ebcae641fced016361f533643bbd4682f836 not found: ID does not exist" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.580312 4810 scope.go:117] "RemoveContainer" containerID="184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52" Feb 19 16:59:50 crc kubenswrapper[4810]: E0219 
16:59:50.581236 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52\": container with ID starting with 184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52 not found: ID does not exist" containerID="184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52" Feb 19 16:59:50 crc kubenswrapper[4810]: I0219 16:59:50.581296 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52"} err="failed to get container status \"184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52\": rpc error: code = NotFound desc = could not find container \"184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52\": container with ID starting with 184b8410364efc8dd2bbe2c66a5da731716998db41b931676b3f37f11a279a52 not found: ID does not exist" Feb 19 16:59:51 crc kubenswrapper[4810]: I0219 16:59:51.459690 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" path="/var/lib/kubelet/pods/fb1500ae-62df-4106-92a9-292e4a530f9b/volumes" Feb 19 16:59:51 crc kubenswrapper[4810]: I0219 16:59:51.715454 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:51 crc kubenswrapper[4810]: I0219 16:59:51.715966 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5mt74" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="registry-server" containerID="cri-o://af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd" gracePeriod=2 Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.288296 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.309354 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities\") pod \"808052e2-d521-47dd-afa0-09caec857462\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.309492 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content\") pod \"808052e2-d521-47dd-afa0-09caec857462\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.309632 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gqvs\" (UniqueName: \"kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs\") pod \"808052e2-d521-47dd-afa0-09caec857462\" (UID: \"808052e2-d521-47dd-afa0-09caec857462\") " Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.315346 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs" (OuterVolumeSpecName: "kube-api-access-5gqvs") pod "808052e2-d521-47dd-afa0-09caec857462" (UID: "808052e2-d521-47dd-afa0-09caec857462"). InnerVolumeSpecName "kube-api-access-5gqvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.322401 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities" (OuterVolumeSpecName: "utilities") pod "808052e2-d521-47dd-afa0-09caec857462" (UID: "808052e2-d521-47dd-afa0-09caec857462"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.374255 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808052e2-d521-47dd-afa0-09caec857462" (UID: "808052e2-d521-47dd-afa0-09caec857462"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.412200 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.412242 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808052e2-d521-47dd-afa0-09caec857462-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.412259 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gqvs\" (UniqueName: \"kubernetes.io/projected/808052e2-d521-47dd-afa0-09caec857462-kube-api-access-5gqvs\") on node \"crc\" DevicePath \"\"" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.488023 4810 generic.go:334] "Generic (PLEG): container finished" podID="808052e2-d521-47dd-afa0-09caec857462" containerID="af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd" exitCode=0 Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.488087 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerDied","Data":"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd"} Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.488101 4810 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mt74" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.488128 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mt74" event={"ID":"808052e2-d521-47dd-afa0-09caec857462","Type":"ContainerDied","Data":"6e87026d9bcbe6b26d8857e5d141664c1b740983c8945dad943f6e6a6bd89c0e"} Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.488157 4810 scope.go:117] "RemoveContainer" containerID="af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.526726 4810 scope.go:117] "RemoveContainer" containerID="00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.539700 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.550033 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5mt74"] Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.568167 4810 scope.go:117] "RemoveContainer" containerID="5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.637876 4810 scope.go:117] "RemoveContainer" containerID="af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd" Feb 19 16:59:52 crc kubenswrapper[4810]: E0219 16:59:52.638520 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd\": container with ID starting with af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd not found: ID does not exist" containerID="af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.638561 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd"} err="failed to get container status \"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd\": rpc error: code = NotFound desc = could not find container \"af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd\": container with ID starting with af9a979525ef507298249d3ca6e031c44578c0f6537e1c2586a5d6e69cd1a9cd not found: ID does not exist" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.638604 4810 scope.go:117] "RemoveContainer" containerID="00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745" Feb 19 16:59:52 crc kubenswrapper[4810]: E0219 16:59:52.639089 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745\": container with ID starting with 00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745 not found: ID does not exist" containerID="00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.639123 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745"} err="failed to get container status \"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745\": rpc error: code = NotFound desc = could not find container \"00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745\": container with ID starting with 00b0bfe8666df0ae7322bfa1858d66088b60c8375202d05eb06bee981a6fa745 not found: ID does not exist" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.639144 4810 scope.go:117] "RemoveContainer" containerID="5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c" Feb 19 16:59:52 crc kubenswrapper[4810]: E0219 
16:59:52.639648 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c\": container with ID starting with 5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c not found: ID does not exist" containerID="5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c" Feb 19 16:59:52 crc kubenswrapper[4810]: I0219 16:59:52.639682 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c"} err="failed to get container status \"5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c\": rpc error: code = NotFound desc = could not find container \"5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c\": container with ID starting with 5cfcb4b39748777ff912a8c1578b8da3c2c348b0f537f5a9d085ed4073373a1c not found: ID does not exist" Feb 19 16:59:53 crc kubenswrapper[4810]: I0219 16:59:53.496200 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808052e2-d521-47dd-afa0-09caec857462" path="/var/lib/kubelet/pods/808052e2-d521-47dd-afa0-09caec857462/volumes" Feb 19 16:59:58 crc kubenswrapper[4810]: I0219 16:59:58.439285 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 16:59:58 crc kubenswrapper[4810]: E0219 16:59:58.440292 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.160692 
4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56"] Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161404 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="extract-content" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161420 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="extract-content" Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161463 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="extract-utilities" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161472 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="extract-utilities" Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161495 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="extract-content" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161503 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="extract-content" Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161528 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161538 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161553 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="extract-utilities" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161560 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="extract-utilities" Feb 19 17:00:00 crc kubenswrapper[4810]: E0219 17:00:00.161574 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161583 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161810 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1500ae-62df-4106-92a9-292e4a530f9b" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.161836 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="808052e2-d521-47dd-afa0-09caec857462" containerName="registry-server" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.162614 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.164614 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.165676 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.173026 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56"] Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.291547 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.291915 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.291980 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ksrp\" (UniqueName: \"kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.394386 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.394449 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ksrp\" (UniqueName: \"kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.394498 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.396193 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.406781 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.410008 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ksrp\" (UniqueName: \"kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp\") pod \"collect-profiles-29525340-89f56\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:00 crc kubenswrapper[4810]: I0219 17:00:00.500965 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:01 crc kubenswrapper[4810]: I0219 17:00:01.048744 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56"] Feb 19 17:00:01 crc kubenswrapper[4810]: I0219 17:00:01.589030 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" event={"ID":"bcf416e2-d4f9-491d-945e-4c1ae3bf3249","Type":"ContainerStarted","Data":"c6bd58f7b811b8f09702843fce0debaf048c5b58f77fccad890e4b669750caa7"} Feb 19 17:00:01 crc kubenswrapper[4810]: I0219 17:00:01.589404 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" event={"ID":"bcf416e2-d4f9-491d-945e-4c1ae3bf3249","Type":"ContainerStarted","Data":"71a910cf5f3b1bb8ed4465276f60e954d96bfc4e8d308aef1368dfbe0291668a"} Feb 19 17:00:01 crc kubenswrapper[4810]: I0219 17:00:01.628926 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" 
podStartSLOduration=1.6289033590000002 podStartE2EDuration="1.628903359s" podCreationTimestamp="2026-02-19 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 17:00:01.606873987 +0000 UTC m=+6631.088904111" watchObservedRunningTime="2026-02-19 17:00:01.628903359 +0000 UTC m=+6631.110933483" Feb 19 17:00:02 crc kubenswrapper[4810]: I0219 17:00:02.603961 4810 generic.go:334] "Generic (PLEG): container finished" podID="bcf416e2-d4f9-491d-945e-4c1ae3bf3249" containerID="c6bd58f7b811b8f09702843fce0debaf048c5b58f77fccad890e4b669750caa7" exitCode=0 Feb 19 17:00:02 crc kubenswrapper[4810]: I0219 17:00:02.604039 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" event={"ID":"bcf416e2-d4f9-491d-945e-4c1ae3bf3249","Type":"ContainerDied","Data":"c6bd58f7b811b8f09702843fce0debaf048c5b58f77fccad890e4b669750caa7"} Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.117224 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.182082 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume\") pod \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.182136 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume\") pod \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.182259 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ksrp\" (UniqueName: \"kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp\") pod \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\" (UID: \"bcf416e2-d4f9-491d-945e-4c1ae3bf3249\") " Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.183267 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume" (OuterVolumeSpecName: "config-volume") pod "bcf416e2-d4f9-491d-945e-4c1ae3bf3249" (UID: "bcf416e2-d4f9-491d-945e-4c1ae3bf3249"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.188869 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bcf416e2-d4f9-491d-945e-4c1ae3bf3249" (UID: "bcf416e2-d4f9-491d-945e-4c1ae3bf3249"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.196590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp" (OuterVolumeSpecName: "kube-api-access-8ksrp") pod "bcf416e2-d4f9-491d-945e-4c1ae3bf3249" (UID: "bcf416e2-d4f9-491d-945e-4c1ae3bf3249"). InnerVolumeSpecName "kube-api-access-8ksrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.284274 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.284309 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.284319 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ksrp\" (UniqueName: \"kubernetes.io/projected/bcf416e2-d4f9-491d-945e-4c1ae3bf3249-kube-api-access-8ksrp\") on node \"crc\" DevicePath \"\"" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.583188 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq"] Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.596124 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525295-b9xtq"] Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.629985 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" 
event={"ID":"bcf416e2-d4f9-491d-945e-4c1ae3bf3249","Type":"ContainerDied","Data":"71a910cf5f3b1bb8ed4465276f60e954d96bfc4e8d308aef1368dfbe0291668a"} Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.630048 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a910cf5f3b1bb8ed4465276f60e954d96bfc4e8d308aef1368dfbe0291668a" Feb 19 17:00:04 crc kubenswrapper[4810]: I0219 17:00:04.630080 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525340-89f56" Feb 19 17:00:05 crc kubenswrapper[4810]: I0219 17:00:05.452493 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99fb536c-bd62-47d3-87d6-9f56d3e51f72" path="/var/lib/kubelet/pods/99fb536c-bd62-47d3-87d6-9f56d3e51f72/volumes" Feb 19 17:00:10 crc kubenswrapper[4810]: I0219 17:00:10.439545 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 17:00:10 crc kubenswrapper[4810]: E0219 17:00:10.440264 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:00:22 crc kubenswrapper[4810]: I0219 17:00:22.442808 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 17:00:22 crc kubenswrapper[4810]: E0219 17:00:22.443953 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:00:33 crc kubenswrapper[4810]: I0219 17:00:33.439171 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 17:00:33 crc kubenswrapper[4810]: E0219 17:00:33.439863 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:00:46 crc kubenswrapper[4810]: I0219 17:00:46.380246 4810 scope.go:117] "RemoveContainer" containerID="932479730c0a6c125151920519504a29cae36b5a433b12d52524833608c85c05" Feb 19 17:00:48 crc kubenswrapper[4810]: I0219 17:00:48.440001 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 17:00:48 crc kubenswrapper[4810]: E0219 17:00:48.441444 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:00:53 crc kubenswrapper[4810]: I0219 17:00:53.891794 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"] Feb 19 17:00:53 crc kubenswrapper[4810]: E0219 17:00:53.893272 4810 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bcf416e2-d4f9-491d-945e-4c1ae3bf3249" containerName="collect-profiles" Feb 19 17:00:53 crc kubenswrapper[4810]: I0219 17:00:53.893294 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf416e2-d4f9-491d-945e-4c1ae3bf3249" containerName="collect-profiles" Feb 19 17:00:53 crc kubenswrapper[4810]: I0219 17:00:53.893738 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf416e2-d4f9-491d-945e-4c1ae3bf3249" containerName="collect-profiles" Feb 19 17:00:53 crc kubenswrapper[4810]: I0219 17:00:53.896472 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:53 crc kubenswrapper[4810]: I0219 17:00:53.930450 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"] Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.027050 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8knm\" (UniqueName: \"kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.027285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.027543 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content\") pod 
\"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.129859 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8knm\" (UniqueName: \"kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.129946 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.129985 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.130453 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.130554 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities\") pod \"redhat-operators-lswlw\" (UID: 
\"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.148678 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8knm\" (UniqueName: \"kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm\") pod \"redhat-operators-lswlw\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") " pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.240279 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:00:54 crc kubenswrapper[4810]: I0219 17:00:54.752921 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"] Feb 19 17:00:55 crc kubenswrapper[4810]: I0219 17:00:55.206700 4810 generic.go:334] "Generic (PLEG): container finished" podID="a8a71d42-1d52-4e61-a884-bb624373e783" containerID="d4e42978c3a6a470b88c9495d08ef7718003b3f791602697fe71e2d3574a2e01" exitCode=0 Feb 19 17:00:55 crc kubenswrapper[4810]: I0219 17:00:55.206770 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerDied","Data":"d4e42978c3a6a470b88c9495d08ef7718003b3f791602697fe71e2d3574a2e01"} Feb 19 17:00:55 crc kubenswrapper[4810]: I0219 17:00:55.207005 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerStarted","Data":"855fe77aae20a3a916acc865c1e7cb25af7b033cecc87ffd8153ba29c46257f8"} Feb 19 17:00:56 crc kubenswrapper[4810]: I0219 17:00:56.218004 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" 
event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerStarted","Data":"f8b61cb82e6bbf75f041346f19b5205afe4541708e98ac877a4ce752aa9482c7"} Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.163176 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525341-lgcg6"] Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.166426 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.177606 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525341-lgcg6"] Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.270847 4810 generic.go:334] "Generic (PLEG): container finished" podID="a8a71d42-1d52-4e61-a884-bb624373e783" containerID="f8b61cb82e6bbf75f041346f19b5205afe4541708e98ac877a4ce752aa9482c7" exitCode=0 Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.270899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerDied","Data":"f8b61cb82e6bbf75f041346f19b5205afe4541708e98ac877a4ce752aa9482c7"} Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.271033 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.271276 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " 
pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.271355 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.271436 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4tv\" (UniqueName: \"kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.373100 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.373565 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.373604 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " 
pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.373662 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4tv\" (UniqueName: \"kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.380725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.380988 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.396573 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc kubenswrapper[4810]: I0219 17:01:00.398096 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4tv\" (UniqueName: \"kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv\") pod \"keystone-cron-29525341-lgcg6\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") " pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:00 crc 
kubenswrapper[4810]: I0219 17:01:00.500092 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525341-lgcg6" Feb 19 17:01:01 crc kubenswrapper[4810]: I0219 17:01:01.112359 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525341-lgcg6"] Feb 19 17:01:01 crc kubenswrapper[4810]: W0219 17:01:01.113615 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b3945c_6038_4494_bb56_0ec80ec66510.slice/crio-6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e WatchSource:0}: Error finding container 6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e: Status 404 returned error can't find the container with id 6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e Feb 19 17:01:01 crc kubenswrapper[4810]: I0219 17:01:01.285294 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525341-lgcg6" event={"ID":"55b3945c-6038-4494-bb56-0ec80ec66510","Type":"ContainerStarted","Data":"6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e"} Feb 19 17:01:02 crc kubenswrapper[4810]: I0219 17:01:02.297773 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerStarted","Data":"13477726b2a82632cdf0e65bb570525ed9b08d31d20c93710785a2d9d52254a9"} Feb 19 17:01:02 crc kubenswrapper[4810]: I0219 17:01:02.299465 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525341-lgcg6" event={"ID":"55b3945c-6038-4494-bb56-0ec80ec66510","Type":"ContainerStarted","Data":"84a9eb8c272dc4fabb345cbeb55b5a7de0453811a5eb16c5fbd0f1464471b84a"} Feb 19 17:01:02 crc kubenswrapper[4810]: I0219 17:01:02.323533 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lswlw" podStartSLOduration=3.763602895 podStartE2EDuration="9.323510454s" podCreationTimestamp="2026-02-19 17:00:53 +0000 UTC" firstStartedPulling="2026-02-19 17:00:55.208315192 +0000 UTC m=+6684.690345316" lastFinishedPulling="2026-02-19 17:01:00.768222751 +0000 UTC m=+6690.250252875" observedRunningTime="2026-02-19 17:01:02.316523382 +0000 UTC m=+6691.798553526" watchObservedRunningTime="2026-02-19 17:01:02.323510454 +0000 UTC m=+6691.805540578" Feb 19 17:01:02 crc kubenswrapper[4810]: I0219 17:01:02.349856 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525341-lgcg6" podStartSLOduration=2.34975703 podStartE2EDuration="2.34975703s" podCreationTimestamp="2026-02-19 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 17:01:02.341355083 +0000 UTC m=+6691.823385207" watchObservedRunningTime="2026-02-19 17:01:02.34975703 +0000 UTC m=+6691.831787164" Feb 19 17:01:03 crc kubenswrapper[4810]: I0219 17:01:03.440388 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b" Feb 19 17:01:03 crc kubenswrapper[4810]: E0219 17:01:03.441144 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3" Feb 19 17:01:04 crc kubenswrapper[4810]: I0219 17:01:04.241370 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:01:04 crc kubenswrapper[4810]: I0219 17:01:04.241805 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lswlw" Feb 19 17:01:05 crc kubenswrapper[4810]: I0219 17:01:05.297030 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lswlw" podUID="a8a71d42-1d52-4e61-a884-bb624373e783" containerName="registry-server" probeResult="failure" output=< Feb 19 17:01:05 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Feb 19 17:01:05 crc kubenswrapper[4810]: > Feb 19 17:01:07 crc kubenswrapper[4810]: I0219 17:01:07.354917 4810 generic.go:334] "Generic (PLEG): container finished" podID="55b3945c-6038-4494-bb56-0ec80ec66510" containerID="84a9eb8c272dc4fabb345cbeb55b5a7de0453811a5eb16c5fbd0f1464471b84a" exitCode=0 Feb 19 17:01:07 crc kubenswrapper[4810]: I0219 17:01:07.355362 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525341-lgcg6" event={"ID":"55b3945c-6038-4494-bb56-0ec80ec66510","Type":"ContainerDied","Data":"84a9eb8c272dc4fabb345cbeb55b5a7de0453811a5eb16c5fbd0f1464471b84a"} Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.829622 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525341-lgcg6"
Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.993566 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data\") pod \"55b3945c-6038-4494-bb56-0ec80ec66510\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") "
Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.994668 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle\") pod \"55b3945c-6038-4494-bb56-0ec80ec66510\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") "
Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.994755 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm4tv\" (UniqueName: \"kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv\") pod \"55b3945c-6038-4494-bb56-0ec80ec66510\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") "
Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.994842 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys\") pod \"55b3945c-6038-4494-bb56-0ec80ec66510\" (UID: \"55b3945c-6038-4494-bb56-0ec80ec66510\") "
Feb 19 17:01:08 crc kubenswrapper[4810]: I0219 17:01:08.999529 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv" (OuterVolumeSpecName: "kube-api-access-gm4tv") pod "55b3945c-6038-4494-bb56-0ec80ec66510" (UID: "55b3945c-6038-4494-bb56-0ec80ec66510"). InnerVolumeSpecName "kube-api-access-gm4tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.003886 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "55b3945c-6038-4494-bb56-0ec80ec66510" (UID: "55b3945c-6038-4494-bb56-0ec80ec66510"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.032990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55b3945c-6038-4494-bb56-0ec80ec66510" (UID: "55b3945c-6038-4494-bb56-0ec80ec66510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.098960 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.099022 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm4tv\" (UniqueName: \"kubernetes.io/projected/55b3945c-6038-4494-bb56-0ec80ec66510-kube-api-access-gm4tv\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.099046 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.100709 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data" (OuterVolumeSpecName: "config-data") pod "55b3945c-6038-4494-bb56-0ec80ec66510" (UID: "55b3945c-6038-4494-bb56-0ec80ec66510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.201683 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b3945c-6038-4494-bb56-0ec80ec66510-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.379904 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525341-lgcg6" event={"ID":"55b3945c-6038-4494-bb56-0ec80ec66510","Type":"ContainerDied","Data":"6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e"}
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.379954 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c211bd33da59f17b3fd9922d7f33b134093e0916a593721b360bdff400d312e"
Feb 19 17:01:09 crc kubenswrapper[4810]: I0219 17:01:09.380015 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525341-lgcg6"
Feb 19 17:01:14 crc kubenswrapper[4810]: I0219 17:01:14.325782 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lswlw"
Feb 19 17:01:14 crc kubenswrapper[4810]: I0219 17:01:14.410626 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lswlw"
Feb 19 17:01:14 crc kubenswrapper[4810]: I0219 17:01:14.445481 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"
Feb 19 17:01:14 crc kubenswrapper[4810]: E0219 17:01:14.448687 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3"
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.143867 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"]
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.144975 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lswlw" podUID="a8a71d42-1d52-4e61-a884-bb624373e783" containerName="registry-server" containerID="cri-o://13477726b2a82632cdf0e65bb570525ed9b08d31d20c93710785a2d9d52254a9" gracePeriod=2
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.533051 4810 generic.go:334] "Generic (PLEG): container finished" podID="a8a71d42-1d52-4e61-a884-bb624373e783" containerID="13477726b2a82632cdf0e65bb570525ed9b08d31d20c93710785a2d9d52254a9" exitCode=0
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.533317 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerDied","Data":"13477726b2a82632cdf0e65bb570525ed9b08d31d20c93710785a2d9d52254a9"}
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.533380 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lswlw" event={"ID":"a8a71d42-1d52-4e61-a884-bb624373e783","Type":"ContainerDied","Data":"855fe77aae20a3a916acc865c1e7cb25af7b033cecc87ffd8153ba29c46257f8"}
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.533394 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855fe77aae20a3a916acc865c1e7cb25af7b033cecc87ffd8153ba29c46257f8"
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.604967 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lswlw"
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.678912 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content\") pod \"a8a71d42-1d52-4e61-a884-bb624373e783\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") "
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.679280 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities\") pod \"a8a71d42-1d52-4e61-a884-bb624373e783\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") "
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.679831 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities" (OuterVolumeSpecName: "utilities") pod "a8a71d42-1d52-4e61-a884-bb624373e783" (UID: "a8a71d42-1d52-4e61-a884-bb624373e783"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.679814 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8knm\" (UniqueName: \"kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm\") pod \"a8a71d42-1d52-4e61-a884-bb624373e783\" (UID: \"a8a71d42-1d52-4e61-a884-bb624373e783\") "
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.680846 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.696615 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm" (OuterVolumeSpecName: "kube-api-access-b8knm") pod "a8a71d42-1d52-4e61-a884-bb624373e783" (UID: "a8a71d42-1d52-4e61-a884-bb624373e783"). InnerVolumeSpecName "kube-api-access-b8knm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.782394 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8knm\" (UniqueName: \"kubernetes.io/projected/a8a71d42-1d52-4e61-a884-bb624373e783-kube-api-access-b8knm\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.809675 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8a71d42-1d52-4e61-a884-bb624373e783" (UID: "a8a71d42-1d52-4e61-a884-bb624373e783"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 17:01:19 crc kubenswrapper[4810]: I0219 17:01:19.884581 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8a71d42-1d52-4e61-a884-bb624373e783-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 17:01:20 crc kubenswrapper[4810]: I0219 17:01:20.541680 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lswlw"
Feb 19 17:01:20 crc kubenswrapper[4810]: I0219 17:01:20.583913 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"]
Feb 19 17:01:20 crc kubenswrapper[4810]: I0219 17:01:20.593065 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lswlw"]
Feb 19 17:01:21 crc kubenswrapper[4810]: I0219 17:01:21.464942 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a71d42-1d52-4e61-a884-bb624373e783" path="/var/lib/kubelet/pods/a8a71d42-1d52-4e61-a884-bb624373e783/volumes"
Feb 19 17:01:28 crc kubenswrapper[4810]: I0219 17:01:28.439183 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"
Feb 19 17:01:28 crc kubenswrapper[4810]: E0219 17:01:28.439940 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3"
Feb 19 17:01:41 crc kubenswrapper[4810]: I0219 17:01:41.453125 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"
Feb 19 17:01:41 crc kubenswrapper[4810]: E0219 17:01:41.453863 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t499d_openshift-machine-config-operator(dfbf74ef-1e94-4826-8583-42b2e246ccf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-t499d" podUID="dfbf74ef-1e94-4826-8583-42b2e246ccf3"
Feb 19 17:01:55 crc kubenswrapper[4810]: I0219 17:01:55.439666 4810 scope.go:117] "RemoveContainer" containerID="a0ba7d3250d70141c7fee7e449509a680dcfa78b2c2d7c2d1cd7f9464f6ba11b"
Feb 19 17:01:56 crc kubenswrapper[4810]: I0219 17:01:56.016848 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t499d" event={"ID":"dfbf74ef-1e94-4826-8583-42b2e246ccf3","Type":"ContainerStarted","Data":"cb9191dca31827157c78f39124e1d9c08f5e7d9be848f27d1b0a60fa20d7dfe4"}